  • Lorna Quandt, Assistant Professor, Gallaudet University
    http://www.tinyurl.com/actionbrainlab
  • Melissa Malzkuhn, Creative Director, Motion Light Lab, Gallaudet University
    http://www.motionlightlab.com
  • Athena Willis, Graduate Student, Gallaudet University

Project: Signing Avatars & Immersive Learning (SAIL)
https://sites.google.com/gallaudet.edu/action-brain-lab/research/signing-avatars-immersive-learning-sail?authuser=0
Facilitators’ Choice | Public Choice

Public Discussion


  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 4, 2020 | 08:18 a.m.

    Welcome, visitors! Thank you for watching our video about the SAIL project at Gallaudet University. In the past year we have made major strides toward our goal, which is to create a proof-of-concept ASL learning experience in virtual reality.

    We have completed the motion capture recordings of ASL content, created 3D signing avatars from the recordings, and built interactive lessons of introductory ASL content. Next up, we will conduct an EEG cognitive neuroscience experiment that will help us see how the "embodied learning" aspect of SAIL influences ASL learning. Importantly, this work is conducted by a deaf-led team, and we are committed to holding deaf talent at the center of everything we are doing.
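
    To give a flavor of what such an EEG comparison could look like, here is a minimal sketch using MNE-Python (a real EEG analysis library), contrasting a hypothetical "embodied" condition against a "watch-only" condition. The file name and trigger codes are placeholders, not the team's actual pipeline:

        # Sketch: contrast ERPs across two hypothetical learning conditions.
        # File name and event codes are illustrative placeholders.
        import mne

        raw = mne.io.read_raw_fif("sail_pilot_raw.fif", preload=True)  # placeholder file
        raw.filter(0.1, 30.0)  # band-pass to keep slow ERP components

        events = mne.find_events(raw)
        event_id = {"embodied": 1, "watch_only": 2}  # placeholder trigger codes

        epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                            baseline=(None, 0), preload=True)

        # Average within each condition, then take the difference wave
        contrast = mne.combine_evoked(
            [epochs["embodied"].average(), epochs["watch_only"].average()],
            weights=[1, -1],
        )
        contrast.plot()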

    Through this work, we see great potential for an entirely new option for learning ASL in the future--from a native ASL user in the comfort of your own home! We welcome questions and comments on any aspect of this project. And again, thank you for your interest.

     
  • Karl Kosko

    Higher Ed Faculty
    May 4, 2020 | 04:24 p.m.

    Very interesting work!
    I can see this being particularly useful now that hand tracking technologies are becoming more affordable and prevalent (e.g., the Oculus Quest). Do you have any thoughts / plans on using this technology in a non-headset context? I can imagine that the VR headset is more immersive and effective, but I wonder about programs that may not have access to such devices.

     
  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 4, 2020 | 04:59 p.m.

    Hi Karl, great question! Yes, that's definitely on our minds. If we can do this project well in VR, then we would also explore non-immersive alternatives, like app- or web-based versions. We are also interested in exploring this as an augmented reality experience. Many potential options--and we'd love to know how they all compare in terms of experience and efficacy. --Lorna

     
  • Karl Kosko

    Higher Ed Faculty
    May 4, 2020 | 05:02 p.m.

    Thank you for the response!

    I'm excited to share this with some of my colleagues in our ASL program.

     
  • Andrea Nodal

    Researcher
    May 4, 2020 | 06:43 p.m.

    Hi! This is super interesting and amazing, especially right now with issues of social distancing. The recent switch to all online classes has made my ASL professors and other interpreting majors realize how difficult it is to learn this language two-dimensionally and in a non-interactive setting. My question is, are the signer's signs being perceived by the headset itself? And if so, how close to the body can it perceive signs? For example, if you are signing PLEASE or CURIOUS, would it be able to see those signs?

     
  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 4, 2020 | 08:28 p.m.

    Hello, and thank you for this comment. You're totally right--with more and more instruction occurring online, exploring the potential of VR for learning seems very timely. We know that nothing can really compete with in-person classes with a great ASL teacher, but that is not always a feasible option.

    In the current version of SAIL, users can see their hands represented in VR (via Leap Motion hand tracking)--you can see this happening in a couple of early scenes in our video. It does present a challenge for body-anchored signs (e.g., PLEASE, CURIOUS), but we are aware of that challenge and will continue to work on it as the project continues. Just like with in-person learning, in our system you wouldn't really be able to see your own production of CURIOUS, but you would be able to see the teacher producing the sign. Anyway--something we will continue to figure out!
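
    To make the body-anchored-sign challenge concrete, here is a minimal sketch of the underlying geometry: a headset-mounted tracker tends to lose hands that come very close to the torso, so a system could flag those positions. Joint names, coordinates, and the threshold are illustrative placeholders, not SAIL's actual code:

        # Sketch: flag hand positions a head-mounted tracker is likely to lose.
        from dataclasses import dataclass
        import math

        @dataclass
        class Vec3:
            x: float
            y: float
            z: float

        def distance(a: Vec3, b: Vec3) -> float:
            return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

        def likely_occluded(hand: Vec3, chest: Vec3, threshold_m: float = 0.15) -> bool:
            # Body-anchored signs (e.g., PLEASE, circling on the chest) bring the
            # hand within a few centimeters of the torso, where a headset-mounted
            # tracker often loses line of sight.
            return distance(hand, chest) < threshold_m

        chest = Vec3(0.0, 1.3, 0.0)
        print(likely_occluded(Vec3(0.02, 1.32, 0.05), chest))  # True  -> warn the learner
        print(likely_occluded(Vec3(0.10, 1.20, 0.40), chest))  # False -> trackable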

     
  • Mitchell Nathan

    Higher Ed Faculty
    May 4, 2020 | 10:28 p.m.

    Your team has done a really amazing job combining VR and embodiment for teaching and learning. I can see from the above that you are considering many future pathways. I wonder most about the technology demands for full motion capture as you consider how to scale this up for a broad user base. As a second question, why go with full VR versus AR, which can allow the world around people to be in the image? I am truly asking, since I don't yet have a clear answer and see many trade-offs. This is something my colleagues and I are also pondering as we consider how to support embodied collaboration among learners and teachers.

     
  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 5, 2020 | 09:31 a.m.

    Thank you! Yes, the pipeline for the motion capture process is quite intensive. One of our goals right now is to create very high-quality signing avatars that can produce native-like ASL. We are now confident that we can do that quite well, but it requires a fair amount of manpower to produce even one lesson. So, as we look to expand the scope of the project, we will continue to confront the quality/efficiency tradeoff. Part of where we land on that will be determined by the purpose of the avatars. For instruction, it is critical that the signing is extremely natural. For other purposes, that may matter less, but flexibility and automation may matter more.

    For now, we opted to go with VR because we were more comfortable developing in that arena, and interested in seeing the effects of the powerful, immersive experience. However, of course we also see the possibilities in AR--so many options we can explore!

     
  • Sarah Heuer

    Undergraduate Student
    May 5, 2020 | 09:22 a.m.

    This is absolutely amazing! Firstly, that this is a deaf-led team is fantastic, and I'm glad to see these innovations driven by the people who understand these kinds of needs the most. Secondly, this idea is something I really connect with! My partner and I were talking yesterday about how they want to learn ASL but our university doesn't have classes. While I suggested some places for online learning, we did talk about the difficulty of learning such a three-dimensional language in an online setting, especially with COVID-19. Using motion capture technology toward this vision is fantastic, and I hope to see continued application of this in the future!

     
  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 5, 2020 | 11:00 a.m.

    Thank you for your positive comments and encouragement! How cool that you just recently thought about this need in your own life. I love hearing that kind of anecdote. 

  • Jacob Sagrans

    Researcher
    May 5, 2020 | 12:00 p.m.

    Great video and project, and so important at this time especially when in-person learning isn't an option. I'm curious if you have any thoughts about how motion capture/VR/avatars like yours could extend beyond teaching ASL. I could see this technology being harnessed to teach all sorts of things virtually now. Maybe a virtual science lab, where students could manipulate virtual chemicals or other virtual materials safely from their own homes?

     
  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 5, 2020 | 01:26 p.m.

    Yes--absolutely! There are some really cool projects out there in which you can interact with molecules or learn engineering principles in VR. The technology really does have an enormous amount of potential for learning. I think we're only seeing the tip of the iceberg here in 2020. VR has an incredible ability to put you in an entirely new place, even one that is physically impossible. The possibilities are almost unlimited!

     
  • Sheryl Burgstahler

    Researcher
    May 5, 2020 | 01:41 p.m.

    Great project. Thanks for sharing.

     

     
  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 5, 2020 | 03:00 p.m.

    Thank you for stopping by, Sheryl! We appreciate the support. 

  • Overtoun Jenda

    Facilitator
    Assistant Provost and Professor of Mathematics
    May 5, 2020 | 04:37 p.m.

    This is an awesome project. Thanks for working on this. How are students working on your team recruited? Is it a summer project, or do you work on this throughout the year? Do you have industry partners yet?

     
  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 5, 2020 | 05:01 p.m.

    Thank you so much for your interest, Overtoun! We recruit student workers through on-campus advertising and word of mouth. We also have excellent PhD students in our Educational Neuroscience program, as well as a team of skilled staff members: 2D/3D artists, a human-computer engineer, a Unity developer, etc. It takes a lot of work, and we have assembled an amazing team!

    We don't have any industry partners yet, but that is in our master plan :-) 

  • Leigh Peake

    Informal Educator
    May 5, 2020 | 07:17 p.m.

    This was really fascinating to watch for someone who knows neither ASL nor VR technologies ... Now I want to learn both! I wondered if you've interacted at all with Nick Giudice or Rich Corey at the VEMI lab at UMaine? Might be interesting to compare notes. Meanwhile, I'm interested in the questions above about the trade-offs around the investment needed for VR. That is an earnest project design struggle we all face, no matter the level of technology. Thanks for the interesting work & video.

     
  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 6, 2020 | 09:06 a.m.

    Hi Leigh, Thanks for the connections to the VEMI lab. I haven't been in touch with them but that looks like a great connection. 

    Yes--using VR is tricky because it holds a ton of potential, but it also requires a lot of work to create high-quality experiences and is a non-trivial investment for users. It really is a universal struggle--quality vs. efficiency, access vs. immersion, cost vs. richness of the experience. There's a lot to juggle.

  • Rebecca Ellis

    Researcher
    May 6, 2020 | 04:50 p.m.

    Greetings! This looks like a very intense project, and you've made great strides! I really like how you modified the helmet to make it more usable and accessible for ASL.

    I'm wondering, do you think you will be able to use this technology to assess how well the students are learning the language? Will students be able to film themselves signing and put it side-by-side with the avatars, or even be able to use tech like the new gaming consoles use (or the updated gloves you use to make the avatars) to judge their movements for accuracy? I'd love to be able to take a class this way, and then get scores based on the accuracy of my signing!

     
  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 6, 2020 | 09:35 p.m.

    Hi Rebecca, great question. Yes, providing corrective feedback is at the top of our list for what comes next. There are many potential options for how to accomplish this, but none that really work well yet. It will take quite a bit of work--but we do hope to incorporate this into the system eventually. As you mentioned, it would be much more helpful to have a sense of your accuracy as you learn, much like you would in a real-life class. 
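
    One plausible ingredient for that kind of feedback, sketched below, is to compare a learner's hand trajectory against the avatar's reference trajectory with dynamic time warping (DTW), which tolerates differences in signing speed. This is an illustration only, with made-up 2D trajectories, not the team's stated method:

        # Sketch: score a learner's hand path against a reference with DTW.
        import numpy as np

        def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
            # Classic O(len(a) * len(b)) dynamic time warping over point sequences.
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(a[i - 1] - b[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
            return cost[n, m]

        reference = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.4], [0.3, 0.4]])  # model signing
        learner = np.array([[0.0, 0.0], [0.05, 0.1], [0.1, 0.2], [0.2, 0.35], [0.3, 0.4]])

        print(f"mismatch score: {dtw_distance(reference, learner):.3f}")  # lower = closer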

     
  • Sasha Palmquist

    Facilitator
    Senior Manager of Community
    May 6, 2020 | 11:50 p.m.

    This is a truly impressive project! Your video was a fantastic snapshot of the exceptional team and the quality of your work to date. As a DC local, I can't wait until I can stop by your lab for an in-person tour :-) Since that is likely to be a while, I am very interested in the range of age groups with which you have tested this interface. What might the challenges and opportunities be for elementary youth learning as compared to HS or undergraduate populations? 

     
  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 7, 2020 | 09:21 a.m.

    Thank you so much for the kind words, Sasha! You are welcome to come visit when the time is right :-) 

     

    We are using this with adults right now, so that we can develop a good proof-of-concept first, and then later turn towards what might need to be adapted for use with younger ages. At the elementary level, we would definitely think about the appearance of the avatar changing (who do children learn best from?), and the speed of presentation. Not to mention, the use of VR with young children might have powerful effects that we're not yet fully aware of. We are tuned into that research and continuing to think about that for future phases of the project. 

  • Diana May

    May 7, 2020 | 11:37 a.m.

    Such very important work, Lorna and team. Well done all round! What a shame that sign language is not the same all over the English-speaking world. Will you be able to use that technology with BSL?

    Diana May, London

     
  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 8, 2020 | 08:09 a.m.

    Thank you for the comment and support, Diana! The same technology could be used with any sign language in principle, but the team creating the content would have to re-film all of the motion capture data with signs from that language. We hope that one day this concept can be applied to other sign languages.

  • Catherine Stimac

    Executive Producer, Education Productions
    May 7, 2020 | 08:16 p.m.

    Love this! What a creative project all around! 

     

     
  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 8, 2020 | 02:19 p.m.

    Thank you for the kind words, Catherine. 

     
  • Nick Lux

    Higher Ed Faculty
    May 8, 2020 | 10:37 a.m.

    What a captivating project with a truly impressive collaboration between different disciplines. 

  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 8, 2020 | 02:19 p.m.

    Thank you! Yes, we are so fortunate to have a wide range of expertise and skills on our team--it allows for so much creativity.

  • Sheryl Burgstahler

    Researcher
    May 8, 2020 | 11:52 a.m.

    Agreed. Nice video presentation.

     
  • Jeanne Reis

    Facilitator
    Director
    May 8, 2020 | 12:29 p.m.

    Hi Lorna! Good to see you in this forum again. Last year, I shared the ASL Clear project in the showcase, and we chatted about neuroscience terminology.

    Great video, excellent work, by a fabulous group! 

    It looks like your team is tackling two major computer science challenges related to signed languages: making avatars that express ASL in a natural way from head to torso, and computer recognition/translation of a signer's utterances in a VR environment. Pretty amazing!

    Which of those elements has been the most challenging to develop?  

    Have you considered applications such as creating signing characters in video games? 

    You mentioned that you'd be seeking industry partners at some point. Does your lab work in partnership with any other universities or institutions? 

     

     
  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 8, 2020 | 02:24 p.m.

    Hi Jeanne. Yes, I remember our discussion about ASL Clear--fantastic project! Thank you for your support. 

    The computer recognition/translation part of what you mentioned has been much harder to crack--in fact, we are not actively working on that piece yet. However, it will be critical to have recognition and feedback for ASL learners as we continue to develop the SAIL system. It's a tricky problem! There are hardware solutions which show some promise, but they often involve bringing special gloves, sensors, and other physical items into the equation. There are computer-vision approaches, but we're not aware of any that are hugely successful yet. There is a LOT of work left to do in this arena.
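
    For a sense of what the computer-vision route involves, here is a minimal sketch of just the first step--extracting per-frame hand landmarks from webcam video--assuming the MediaPipe Hands library. Recognizing actual signs from such landmarks is the genuinely hard, unsolved part described above:

        # Sketch: stream hand landmarks from a webcam with MediaPipe Hands.
        import cv2
        import mediapipe as mp

        hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
        capture = cv2.VideoCapture(0)  # default webcam

        while capture.isOpened():
            ok, frame = capture.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV delivers BGR
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                for hand in results.multi_hand_landmarks:
                    wrist = hand.landmark[0]  # landmark index 0 is the wrist
                    print(f"wrist at ({wrist.x:.2f}, {wrist.y:.2f})")  # normalized coords
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break

        capture.release()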

    We were actually just awarded an NSF INCLUDES Planning grant to try to foster greater collaboration in "sign-related technology," which would include what I described above and may also be relevant to ASL Clear. I'll add you to the contact list for that project, so you may hear from us about it in the near future!

  • Jeanne Reis

    Facilitator
    Director
    May 8, 2020 | 04:32 p.m.

    Please do add me to the contact list, thanks Lorna! 

    Another question just came to mind! In the ASL-learning environment, real-time feedback is important to novice signers, and usually comes in the form of the facial expressions of conversational partners (either comprehension or confusion). Does your team feel that type of feedback needs to be included in the design of the VR context? And if so, have you explored any options--even fairly 'low-tech' options like showing students a video stream of themselves signing?

     
  • Jeremy Roschelle

    Researcher
    May 8, 2020 | 01:53 p.m.

    Hi Lorna, all of us at CIRCL remain big fans of your work! Really enjoyed the video.

     
  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 8, 2020 | 02:18 p.m.

    Thanks so much, Jeremy! I really appreciate the support. I'm so glad you enjoyed our video!

  • Annette Walsh

    May 8, 2020 | 03:04 p.m.

    I have been fascinated by the people who sign on television news reports for various medical authorities and national figures. Their role in providing information for the hearing impaired is so important. I enjoyed Signing Avatars.

     
  • Holly Morin

    Marine Research Associate
    May 9, 2020 | 12:46 p.m.

    This is truly fascinating and an amazing project- thank you for sharing this innovative project!

  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 10, 2020 | 07:27 p.m.

    Thank you so much, Holly!

  • Judi Fusco

    Researcher
    May 12, 2020 | 03:03 p.m.

    I had to stop by and say hi and see the new video.  Really clean explanations.  Thanks and hope you all are well!

  • Lorna Quandt

    Lead Presenter
    Assistant Professor
    May 12, 2020 | 03:09 p.m.

    Hi Judi! I am happy to see you here--thanks for stopping by. I am glad you enjoyed the video! I hope all is well with you and everyone at CIRCL. 

  • Jeffrey Ram

    Researcher
    May 12, 2020 | 05:10 p.m.

    What a neat video.  Our most recent 4-day PD was signed the entire way through.

  • Michael I. Swart

    Researcher
    May 12, 2020 | 07:39 p.m.

    Great work. Can't wait to see this become a tool that every child uses in elementary school so that more of our communities can use ASL. This technology looks very cool, very detailed, and very labor-intensive. We also work with motion capture and are moving toward AR as well, and it was great to find this project among this year's S4A showcase: https://stemforall2020.videohall.com/presentations/1705. They seem to be developing AR that can leverage phones. This would be huge in our work in high schools, where all the students have phones, and maybe there is some work they are doing that could complement this great work too.