Final Project Comments

Please write any last comments/suggestions for your classmates’ final projects.

  1. Mike, Dan, and Rosemary – The presentation was really good, especially the animation. I enjoyed it.
    I’m just a little worried about the hazard-detection visuals obstructing the view at some point: the red boxes shown as visual feedback could be too much on the glasses if the user is moving fast. But I think that with proper training it can be managed. Overall, I liked the presentation.

    Alicia and Colin – I liked how the visual for the system menu display is set up now. It looks much simpler and cleaner. One of the comments I made before was that the menu was too crowded with words, and the new version looks much better this way.

  2. Expanding Aira:
    It wasn’t completely clear to me what the proposed design was. Are you proposing Chloe as a whole, or does Chloe already exist and you are proposing just the ice detection?
    I also feel like there are some areas to explore:
    – What happens if the user has ice detection on and they enter a building?
    – What happens when visibility is poor and the system can’t tell whether there is ice (for example, if it’s snowing)?

    Compensatory Augmentation:
    You mention audible beeps or vibrations to indicate errors or alert the user to a hazard, but is there anything beyond those beeps? What if it isn’t clear to the user what the hazard or error is? What if they forgot their phone and didn’t remember that long vibrations meant the phone was not connected?
    I have some concerns about the system requiring three separate devices when the target user group is 65+, particularly since one of the defining characteristics of this group is potential cognitive impairment. Expecting these individuals to reliably remember all three components and know how to use them effectively seems like a potential pitfall.

  3. Meetha/James:
    Overall I think you had a strong concept, but I would have appreciated a deeper look into options that could save the user money. Having the user pay for a service and then also spend time, i.e. their money, describing the situation to get assistance seems a little like double-dipping.

    Colin/Alicia:
    Have either of you operated a current ‘smart car’? One with some of the options you mentioned: lane assist, automatic braking, speed-sensing cruise control? Did you do any research on the current voice-command options, such as Alexa in the car or the vehicles that use voice control through the stereo?
    My concern was with the legal/insurance implications of some of the features you described, such as the driver analysis report and the learning system that adapts to the driver. What protections would the driver/owner have against his/her rates being increased based on that data? Who stores/protects that data?

  4. Alicia & Colin – Autonomous Vehicles and Trust
    I would have liked to see a little on how the physical buttons would interact with the visual/touchscreen interface. Would all options be accessible through the physical buttons? Do the physical buttons navigate the visual interface, or do they have specific functions (like the volume, seek, etc. buttons on a radio)? I’m curious about this aspect because I have a passionate hatred for visual displays in confined spaces like cars; the viewing angles and the act of manipulating them can be annoying, so I would be more likely to use the physical options.

    I liked the visual touchscreen interface; it looked very intuitive and put the relevant information on screen without overwhelming the user with data/options. I felt a bit of the opposite about the audio interface, potentially because it felt a little too tied to the visual interface.

    I have found that with a visual interface I like having a lot of options available on the same screen, while with audio I prefer a deeper, more layered menu system. This is mostly because I hate listening to long lists of options. I much prefer audio interfaces that can use natural language to give very specific responses (Alexa-style), or audio menu trees with layered menus that have only 2-3 options per level. As an example, when I call a business and get an automated voice service, the worst are always the ones that present options 1-9 at the first menu, rather than something like {1-New Customers 2-Existing Customers}-2>{1-Order Support 2-Tech Support 3-Return}. I find this lets me avoid listening to options that don’t pertain to my current needs.
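
    To make that layered-menu preference concrete, here is a minimal sketch (my own illustration, not anything from the team’s design) of an audio menu tree with at most three options per level; the prompts and categories are invented, and print/input stand in for text-to-speech and keypad presses:

        # Hypothetical layered voice menu: 2-3 options per level instead of
        # a flat 1-9 list, so callers only hear branches relevant to them.
        MENU = {
            "prompt": "Press 1 for new customers, 2 for existing customers.",
            "options": {
                "1": {"prompt": "Connecting you to new-customer signup.", "options": {}},
                "2": {
                    "prompt": "Press 1 for order support, 2 for tech support, 3 for returns.",
                    "options": {
                        "1": {"prompt": "Connecting you to order support.", "options": {}},
                        "2": {"prompt": "Connecting you to tech support.", "options": {}},
                        "3": {"prompt": "Connecting you to returns.", "options": {}},
                    },
                },
            },
        }

        def run_menu(node):
            """Speak each short prompt and descend until a leaf is reached."""
            while node["options"]:
                print(node["prompt"])                     # stand-in for text-to-speech
                choice = input("> ").strip()
                node = node["options"].get(choice, node)  # bad input repeats the level
            print(node["prompt"])                         # leaf: final routing message

        run_menu(MENU)

    The payoff of the structure is that a caller who picks “existing customer” never has to listen to the new-customer options, matching the flow described above.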

    Dan, Rosemary, & Mike – Compensatory Augmentations for Older Adults

    The video showed many situations where the technology performed perfectly, but it did not show recovery from error. For example, if a certain object was consistently being read as a hazard but was not one, would the user have the option to report the false positive to the system and be prompted to retake the photos for that environment?

    A potential barrier to use for this system is the slideshow/image-selection screen for when you arrive somewhere and need to pick the correct location. For example, if you are severely visually impaired and your caregiver initially set up the images, it may be difficult for you to select the correct location. Something like tagged items in your most common locations (a specific patterned vase at your house, a specific photo on your desk at work) that the system could use to automatically identify your location from the images would help a lot. Also, using GPS to narrow down the options so the user is only shown the locations they could feasibly be in would streamline the process; a sketch of that idea follows.
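
    As a rough sketch of the GPS idea (my own suggestion, not part of the team’s design; the saved locations and the 75 m radius are invented), the system could keep only the saved locations within walking distance of the current GPS fix:

        # Pre-filter saved locations by straight-line distance from the GPS fix.
        from math import radians, sin, cos, asin, sqrt

        def distance_m(lat1, lon1, lat2, lon2):
            """Great-circle (haversine) distance between two points, in meters."""
            lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
            a = (sin((lat2 - lat1) / 2) ** 2
                 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
            return 2 * 6_371_000 * asin(sqrt(a))

        SAVED_LOCATIONS = [  # hypothetical entries from the caregiver's setup
            {"name": "Home - living room", "lat": 42.2808, "lon": -83.7430},
            {"name": "Work - desk",        "lat": 42.2920, "lon": -83.7160},
            {"name": "Pharmacy",           "lat": 42.2815, "lon": -83.7455},
        ]

        def nearby(lat, lon, radius_m=75):
            """Return only the saved locations within radius_m of the fix."""
            return [loc for loc in SAVED_LOCATIONS
                    if distance_m(lat, lon, loc["lat"], loc["lon"]) <= radius_m]

        print([loc["name"] for loc in nearby(42.2809, -83.7432)])  # -> ['Home - living room']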

  5. For Meetha & Pascal, on the ice-detection proposal: I can see you’ve put a lot more consideration and detail into the design since the last time you presented it. Although I didn’t get a sense of the current real-life capabilities or limitations of ice detection through image analysis, if we take that technology as a given, your implementation seems like a pretty complete package. My first remaining concern is that the system stays on continuously once activated, even when there is no longer a chance of encountering ice (indoors, in warmer weather, etc.).

    Second, continuously uploading high-resolution images to the cloud will require a pretty sizable data plan, and relies completely on having at least a 3G connection at all times the system is in use. What will the system do if it loses its connection to the cloud, or if the upload speed is too low for real-time image processing?

    Lastly, what are the privacy, security, and legal concerns of all these images being sent to the cloud? Uploaded images will undoubtedly contain other people and lots of valuable metadata. Will they be encrypted? Who owns the images, and what restrictions would there be on their use and storage? Would they be subject to law-enforcement subpoena? What happens if a user has an ice accident while using the system? Are images preserved for legal purposes? Maybe Aira has policies that already cover this sort of thing, but it would be good to mention, because it’s a bit of a Pandora’s box. Overall though, great work; I could easily see this becoming a reality.
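
    As a postscript on the data-plan question, a quick back-of-envelope calculation (the image size, capture rate, and 3G speed below are my own assumptions, not figures from the proposal) suggests the required uplink is far beyond what 3G sustains:

        # Back-of-envelope bandwidth estimate; every input is an assumption.
        image_mb     = 2.0   # assumed size of one high-resolution frame
        frames_per_s = 1.0   # assumed capture rate for continuous monitoring
        uplink_mbps  = 1.0   # optimistic sustained 3G upload speed

        needed_mbps = image_mb * 8 * frames_per_s            # -> 16.0 Mbps required
        hourly_gb   = image_mb * frames_per_s * 3600 / 1000  # -> 7.2 GB per hour

        print(f"required uplink: {needed_mbps} Mbps vs. roughly {uplink_mbps} Mbps available")
        print(f"data volume: {hourly_gb} GB per hour of use")

    Even dropping to one frame every five seconds would still need roughly 3.2 Mbps of sustained upload, which is why a local-processing or degraded-alert fallback seems worth specifying.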

    For Rosemary, Mike, and Dan, on Compensatory Augmentation for Older Adults: Wow! Super professional job on the presentation. It was very engaging, and a great way to integrate your user stories/storyboards into the presentation as a whole. One suggestion I would offer for the alerts in your animations is to make them stand out more, maybe by outlining the whole object and filling it with a bright color for better contrast.

    I also wonder about your image uploads for building a location, and how you are comparing images. It seems you would need a great many individual images to get full coverage from all possible angles and distances, which would rely heavily on the time and detail-oriented effort the caretaker is willing/able to invest. This issue made me think of the 360-degree cameras that real estate agents use to build interactive walkthroughs of their properties. There are also 3D scanners that can build detailed models of interior spaces. Take a look at the link below; it is just a snarky review of an Indiegogo product that claims to be a hybrid of a 360 camera and a 3D camera, but it has a decent analysis of how it works and some great images that give an idea of what I’m talking about. Comparing live views against a 360-degree 3D model might give more accurate results, and locations would be much faster and easier to set up, with more assurance of complete coverage, since you can see in the 3D model where the gaps are. Also, reducing the time and effort needed for the imaging means you could easily suggest loading multiple models for different times of day, to account for different light, shadows, and visibility in the images. https://3dscanexpert.com/wunder360-s1-pocketable-affordable-3d-scanner-rooms-interiors-exteriors-indiegogo/

    Regarding outdoor use of the glasses for hazards: I think this is interesting, because earlier in this process I was super skeptical, but now I can relate it to what both of the other teams are working on. Real-time 3D image analysis in a dynamic environment (rather than a static, predetermined indoor environment) will be extremely processing-intensive and will carry all the same security issues, but AI and augmented reality will most likely be able to support this kind of thing eventually; if it can be done for ice, it can be done for other things. And if a driverless car can keep track of its changing environment and react appropriately, maybe piggybacking on that technology could be a solution, and at the least it could be a simple way to explain the idea to your audience.

    Last, regarding medication alerts on the watch: I was hoping to see what those alerts look like, because this is the smallest interface and the place where you most need detailed information to keep someone safe. I like the idea of adding a photo and description of the pill, but what about having the glasses read a barcode or QR code to make sure the pill matches up? Also, how does the system make sure the wearer correctly records that they took a given pill at a certain time? I have used reminder systems for things like this, and I often get frustrated that even though I got the reminder and did the task, I might forget to update the tracker, so the tracking becomes unreliable. Overall, it’s clear you put lots of time into the project and addressed many of the potential issues that came up.
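
    To illustrate the QR cross-check (my own sketch, not something from the presentation; the schedule format and codes are invented), the glasses could compare the scanned code against the dose that is currently due. A nice side effect is that a successful scan could double as the “I took it” log entry, which would also address the forgot-to-update problem:

        # Compare a code read by the glasses against the next scheduled dose.
        from datetime import datetime

        SCHEDULE = [  # hypothetical caregiver-entered medication schedule
            {"hour": 8,  "name": "Lisinopril 10 mg", "code": "0093-1036"},
            {"hour": 20, "name": "Metformin 500 mg", "code": "0093-7214"},
        ]

        def check_scan(scanned_code, now=None):
            """Match the scanned code to the dose scheduled closest to now."""
            now = now or datetime.now()
            due = min(SCHEDULE, key=lambda d: abs(d["hour"] - now.hour))
            if scanned_code == due["code"]:
                return f"Match: this is your {due['name']}. Dose logged at {now:%H:%M}."
            return f"Warning: this does NOT match your scheduled {due['name']}."

        print(check_scan("0093-1036", datetime(2019, 4, 30, 8, 5)))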

  6. Aira – Is uploading images to the cloud feasible for rapid-response processing? Is the ice-detection feature real-time or near real-time? We felt that the image processing needs to be done locally; otherwise real-time hazard detection is not feasible. Do you have data or research that supports your decision to upload images to the cloud? Did you expand on the idea of using a sensor in a cane, or is that just a future R&D project?

    Autonomous Vehicle – I like many of the ideas in your design, but I question the whole notion of creating a “My Style” of driving that directly affects or modifies the driving parameters of the vehicle’s system. I do understand your motivation for wanting to do so, but IMO the autonomous driving system should always be configured to drive defensively, conservatively, and safely. Putting that comment aside, do you have any research you can cite on whether manufacturers of autonomous driving systems allow, or plan to allow, an external system to configure parameters that influence the motion-planning algorithm? This seems fraught with peril to me, and it would create a huge liability for the manufacturer if anything went wrong with this interface.
