Advancing Computational Grounded Theory for Audiovisual Data from STEM Classrooms

Presenters:
  1. Christina Krist, Assistant Professor, University of Illinois at Urbana-Champaign
  2. Nigel Bosch (https://pnigel.com), Assistant Professor, University of Illinois at Urbana-Champaign
  3. Cynthia D'Angelo, Assistant Professor, University of Illinois at Urbana-Champaign
  4. Elizabeth Dyer (https://www.mtsu.edu/faculty/elizabeth-dyer), Assistant Director, Middle Tennessee State University
  5. Joshua Rosenberg (https://joshuamrosenberg.com), Assistant Professor, STEM Education
Public Discussion
  • Christina Krist

    Lead Presenter
    Assistant professor
    May 4, 2020 | 04:32 p.m.

    Thanks for stopping by our video! We are excited about this new project focused on developing analysis tools for researchers. Our team is composed of qualitative science and math education researchers as well as computer scientists. We are all passionate about coming up with creative solutions to the many analytical and ethical challenges facing researchers who want to use video data of STEM classrooms.

    We'll be actively monitoring the discussion and would love to hear your comments and questions! We're especially interested to hear your thoughts in the following areas:

    1. How could you see these methods being useful for the work that you do? 
    2. What aspects of STEM classrooms or learning would be useful to analyze using these tools? 
    3. What concerns come to mind related to ethics that we should consider as we develop these tools? 

    For more info or to contact us, check out our project website: http://tca2.education.illinois.edu

  • Sarah McCarthey

    Higher Ed Faculty
    May 6, 2020 | 09:58 a.m.

    This is SOOOO cool!

  • Christina Krist

    Lead Presenter
    Assistant professor
    May 7, 2020 | 03:47 p.m.

    Thanks Sarah!

  • Steven Greenstein

    Facilitator
    May 5, 2020 | 01:05 p.m.

    This is new and exciting territory. Thank you for sharing this work.

    Would you please talk about how you centered analyses of body positioning in relation to math education research? Are you to the point in your research where you've realized its power for informing the teaching and learning of mathematics? In thinking a bit about the possibilities, I'd be curious what an analysis might reveal in a math classroom where it's routine for students to be up and moving around, say, alternating between working at a desk, working on the board, and presenting at the board.

  • Nigel Bosch

    Co-Presenter
    May 5, 2020 | 01:54 p.m.

    Thank you for the questions! We are in the first year of the project and just starting to figure out the answers, while thinking a lot about possibilities like those you mention. By the end of the project we hope to be at that point of informing math teaching and learning, but at the moment we're tackling technical and definitional issues that will highly influence the types of conclusions we can draw. For example, defining what constitutes a unit of analysis (especially in terms of time) for the computational approaches we're developing will likely impact how well we can align to and expand on qualitative analyses.

    One thing we're planning to analyze with respect to body position is the classroom activity type, as well as the transitions between activity types over time. Students in our data work individually, in groups, interact with the teacher, and do other activities that might be related to both body position and movement (like gestures). That might align closely to what you're curious about with students up and moving around, which we're very curious about as well!
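    As a rough sketch of this idea (the function names and distance threshold here are hypothetical, not the project's actual pipeline), one could flag likely group work by checking how spatially clustered the detected positions are in a frame:

```python
import math

def mean_nearest_neighbor_distance(positions):
    """Average distance from each person to their nearest neighbor."""
    dists = []
    for i, (xi, yi) in enumerate(positions):
        nearest = min(
            math.hypot(xi - xj, yi - yj)
            for j, (xj, yj) in enumerate(positions) if j != i
        )
        dists.append(nearest)
    return sum(dists) / len(dists)

def guess_activity(positions, group_threshold=1.0):
    """Crude heuristic: tightly clustered people suggest group work."""
    if mean_nearest_neighbor_distance(positions) < group_threshold:
        return "group"
    return "individual"

# Students seated in small clusters vs. spread out at individual desks
clustered = [(0, 0), (0.3, 0.1), (5, 5), (5.2, 4.9)]
spread = [(0, 0), (3, 0), (6, 0), (9, 0)]
print(guess_activity(clustered))  # group
print(guess_activity(spread))     # individual
```

    Running a heuristic like this frame by frame would also expose transitions between activity types over time, along the lines described above.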

     

     
  • Ed Robeck

    Facilitator
    May 5, 2020 | 06:42 p.m.

    This has great potential. I've done a lot of observing in science classrooms--both informally and using observation protocols. It is very difficult to attend to all that is going on--I see a group having an interesting exchange with the teacher, and I miss another group discussing a question the teacher just left them with. Do you see this approach as being a way to more holistically/comprehensively characterize the classroom discourse? 

  • Nigel Bosch

    Co-Presenter
    May 5, 2020 | 06:54 p.m.

    Certainly! Our approach will complement live observation by avoiding this problem of missing things due to divided attention, though not necessarily providing the same kinds of insights that an observer might. We are also interested in advancing retrospective analysis of videos, even without live observers. Sometimes these datasets are so large that annotation is intractable, while automated analyses can perhaps find patterns across hundreds of hours of video/audio quickly, and direct annotators' attention to the most interesting areas of a large dataset.

  • Ed Robeck

    Facilitator
    May 5, 2020 | 06:44 p.m.

    The "body language" analysis (sorry if that is a naïve term for such a sophisticated technique) really fascinates me. Is there any analysis of the verbal interaction taking place along with the physical analysis?

  • Nigel Bosch

    Co-Presenter
    May 5, 2020 | 07:01 p.m.

    We will be analyzing verbal interaction in a manner of speaking, though we don't anticipate it will be practical to automatically transcribe the text of what students are saying. Classroom audio is quite messy, in that there are often multiple speakers, imperfect audio data, and lots of other noises. However, we are analyzing aspects of speech that do not necessarily require transcription. For example, we are attempting to detect silence, voice activity of any kind, overlapping speech with multiple speakers, and then aligning those kinds of events with events in the video data. The audio and video can complement each other by providing context (e.g., how connected are speech and gestures in different topics or classroom activities?).
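    A toy version of this kind of transcription-free audio analysis might separate silence from voice activity using short-time energy (the frame size and threshold below are illustrative values, not the project's):

```python
# Label each frame of audio samples as speech or silence by comparing
# short-time energy against a fixed threshold.

def frame_energies(samples, frame_size=4):
    """Mean squared amplitude per non-overlapping frame."""
    return [
        sum(s * s for s in samples[i:i + frame_size]) / frame_size
        for i in range(0, len(samples) - frame_size + 1, frame_size)
    ]

def voice_activity(samples, frame_size=4, threshold=0.01):
    """True for frames that likely contain speech, False for silence."""
    return [e > threshold for e in frame_energies(samples, frame_size)]

quiet = [0.001, -0.002, 0.001, 0.0]   # near-silent samples
loud = [0.5, -0.4, 0.6, -0.5]         # voiced samples
print(voice_activity(quiet + loud))   # [False, True]
```

    Real classroom audio would of course need a more robust detector (noise floors vary, and overlapping speakers need separate treatment), but the frame-level labels it produces are the kind of event stream that can then be aligned with the video data.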

  • Ed Robeck

    Facilitator
    May 6, 2020 | 08:28 a.m.

    Thanks for the reply, Nigel. It really seems like you're working with an innovative approach. There will surely be a lot of strands of data to bring together in the long run as you sort out the types and quality of interactions, and how movements correlate to those. I wondered, does the system track the teacher/adults automatically, or is there some sort of researcher process for keeping them marked? I can imagine with all the movement it could be challenging to automate that tracking.

  • Nigel Bosch

    Co-Presenter
    May 6, 2020 | 08:52 a.m.

    You're quite right; it is challenging! That is one of the technical problems we're working on, because currently there is no automatic distinction between teachers and students, nor a way to track the identity of a person (either teacher or student) over time. However, we think it is at least possible to somewhat solve this problem by measuring the amount and type of movement of individuals to see if there are clear distinctions between teachers and students.
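    That movement-based distinction could be sketched roughly as follows (the tracks and person labels are made up for illustration; in practice the positions would come from pose estimates):

```python
import math

def total_movement(track):
    """Sum of frame-to-frame displacements for one person's (x, y) track."""
    return sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(track, track[1:])
    )

def likely_teacher(tracks):
    """Guess the teacher as the person who moves around the room the most."""
    return max(tracks, key=lambda pid: total_movement(tracks[pid]))

tracks = {
    "person_a": [(0, 0), (0.1, 0), (0.1, 0.1)],  # mostly seated
    "person_b": [(2, 2), (4, 3), (7, 1)],        # circulating the room
}
print(likely_teacher(tracks))  # person_b
```

    A heuristic this simple would fail in the interesting edge cases (e.g., a student presenting at the board), which is part of why the problem is genuinely hard.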

  • Alison Mercier

    Researcher
    May 6, 2020 | 09:56 a.m.

    I find this fascinating! Thank you for sharing. My mind quickly goes to the ways that this might be able to inform on group dynamics, interaction patterns, and positioning within productive, collaborative group work amongst students. I'm eager to see where your project and research goes.

  • Cynthia D'Angelo

    Co-Presenter
    May 6, 2020 | 10:02 a.m.

    Thanks! We're really excited to see where the project goes as well. We think that it could be a really good tool for education researchers to help them analyze more data and see additional patterns. 

  • Eric Hamilton

    Higher Ed Faculty
    May 6, 2020 | 12:07 p.m.

    This is so interesting. Your emphasis on processing large datasets, in this case from visualization and video, is one that will become increasingly prominent. We are applying epistemic network analysis, a tool of quantitative ethnography, to try to organize and sift through large sets of discourse data in our project, also on this site. I am trying to think through what it would be or might be like to apply the techniques you are trying to our students as they collaborate. A lot of our interactions are over Zoom, even pre-pandemic, but this might be something that we should explore. If you are able, let us know if the types of collaboration you might see in Zoom (or in our project, where some are in Zoom and some not) might have applicability. You have a very cool project. :)

  • Cynthia D'Angelo

    Co-Presenter
    May 6, 2020 | 12:39 p.m.

    Thanks Eric! Your project looks really interesting too. I'm very familiar with ENA and am interested to see how you will apply it in your project.

    I think collaboration looks a lot different via remote tools versus in person and it would be a really interesting extension of the work on our project to think about what would need to change in our analysis techniques to take that into account. 

  • David Campbell

    Facilitator
    May 6, 2020 | 12:38 p.m.

    This is very cool. I'll bet your program would easily identify the gorilla in that basketball-counting video! Assuming this works beyond your expectations, what would be some specific research questions your computer program could address?

  • Cynthia D'Angelo

    Co-Presenter
    May 6, 2020 | 12:42 p.m.

    Thanks! Yes, I'm guessing we would be able to detect the gorilla. :)

    Our project is set up so that these tools and methods could be used by other researchers to investigate the same types of research questions that they would typically look at when analyzing video data qualitatively. This process of analysis usually takes a very long time and is very laborious and our hope is that our tools and methods would enable researchers to investigate a larger amount of classroom video data with the same types of lenses they normally would. It might also enable them to find additional patterns in their data that wouldn't have been accessible to them with the smaller amount they would typically analyze.

  • Omar Ashour

    Higher Ed Faculty
    May 7, 2020 | 01:32 a.m.

    Nice work!

    Do you imagine your algorithm/program being able to provide real-time feedback to instructors in classrooms as a way to improve their teaching?

  • Elizabeth Dyer

    Co-Presenter
    May 7, 2020 | 01:12 p.m.

    Thanks! We're encouraged to see that people could potentially see our work as useful in a variety of ways.

    In our project, we are narrowly focused on developing tools for researchers, which will result in specific tools that may not be useful pedagogically. So tools that could be used for feedback to teachers (either in real time or offline) are not something we will develop in the project. However, since our tools will be publicly available and future work could build on them to create pedagogical tools, we want to carefully consider the potential ethical issues beyond researchers.

    In my research, I focus a lot on teacher learning. I think there could be potential in the tools being a foundation for similar tools specifically designed to support teacher learning, but extremely careful design work would be necessary to create tools that empower and support teachers in their own sensemaking rather than something that acts as surveillance or evaluation of teachers and students.

    We truly believe that the power in our tools comes from integrating human and computer analysis, and I think the same principle would apply to teacher learning. So it is less about tools themselves being able to provide feedback, and instead, supporting teachers to make sense of information coming from both computer and human analysis to develop feedback.

    For example, maybe tools could be developed that help a teacher keep track of how much time they spend interacting with each group in the classroom and what types of interactions they have. I know teachers who are curious about this and find it tedious to track. I wouldn't necessarily consider this feedback, but instead, data or information. The teacher could use that information, which could help them identify a specific interaction with a group they want to analyze more deeply, perhaps with colleagues or an instructional coach.

    I think this approach aligns well with other projects that are carefully designing tools for teachers around classroom data, and they've found it incredibly important to design ways to support teacher sensemaking, not just the feedback or data given to teachers. Two that come to mind are: Practical Measures, Routines, and Representations (https://www.pmr2.org/); and the EQUIP classroom observation tool (https://www.equip.ninja/).

    I'm curious what kind of feedback or information you think would be useful for instructors to have that is difficult to get with existing tools?

  • May 7, 2020 | 01:47 p.m.

    I love the focus on computer-augmented methods for understanding classroom interaction and learning. The tool seems like it's working really well. What are some of the challenges you're seeing in the computer vision techniques?

  • Joshua Rosenberg

    Co-Presenter
    May 7, 2020 | 02:38 p.m.

    Hi Michael, and thank you for your message.

    Our expert on computer vision is Nigel Bosch. I know from some recent updates that identifying students who did not consent to participate is one practical/ethical challenge that we are working to address. Another concerns how to distinguish between the teacher and students when, for example, the teacher is behaving more "like a student" (such as when providing feedback in a small group) or when a student is behaving "like a teacher" (like when presenting to the class!). Some other challenges are more organizational. My understanding is that the OpenPose tool generates a ton of data. But Nigel and his team are certainly up to that task. I will reach out to Nigel (or invite him to share more) for any additional details.

    As an aside, I loved your video on music and coding - I'm excited to learn more about the project.

    Thank you again for posting,

    Josh (on behalf of the TCA^2 team)

  • Nigel Bosch

    Co-Presenter
    May 7, 2020 | 03:12 p.m.

    To reiterate and agree with what Josh said, the most challenging part of the computer vision methods so far is figuring out who is who. That applies both over time (which person in one frame of video corresponds to a person in the previous frame?) and within each moment (who is a teacher, student, or person who we should not analyze?). Occasional tracking problems, people moving in and out of frame, and people passing behind one another all make this very tricky!
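    The over-time part of that problem can be illustrated with greedy nearest-centroid matching between consecutive frames (a simplification; real multi-person trackers are considerably more sophisticated, and the distance cutoff here is arbitrary):

```python
import math

def match_people(prev, curr, max_dist=1.0):
    """Greedily match each previous-frame centroid to the closest
    current-frame centroid within max_dist. Unmatched people are treated
    as having left the frame (or as newly detected)."""
    matches = {}
    unused = set(range(len(curr)))
    for i, (xp, yp) in enumerate(prev):
        best, best_d = None, max_dist
        for j in unused:
            xc, yc = curr[j]
            d = math.hypot(xc - xp, yc - yp)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            unused.discard(best)
    return matches

prev = [(0, 0), (5, 5)]
curr = [(5.2, 4.8), (0.1, 0.1)]  # same two people, detected in a different order
print(match_people(prev, curr))  # {0: 1, 1: 0}
```

    Occlusion and people leaving the frame are exactly where greedy matching like this breaks down, which hints at why identity tracking is the hard part.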

  • Christina Krist

    Lead Presenter
    Assistant professor
    May 7, 2020 | 03:47 p.m.

    (Just popping in to say hi Mike! :))

  • May 7, 2020 | 03:00 p.m.

    This is a fascinating project!  I know you're in the early stages of your work and may not have looked at these issues yet, but how are you thinking about analysis in terms of cultural, racial, and/or gender dynamics?  I wonder whether, for example, body positioning might look different between a student and teacher of the same gender vs different gender.  I imagine some of that analysis will fall to the human researchers; are you thinking about what kind of training you might want to give them for using your tool?  Again, I know you're early on, so I'm just curious about where your thinking is (not on definite answers) :)

  • Joshua Rosenberg

    Co-Presenter
    May 7, 2020 | 05:48 p.m.

    Dear Isabel, thank you for asking this - cultural, racial, and gender dynamics are especially important when it comes to computational methods (for us and others carrying out similar kinds of research). We have started to think about how to incorporate these factors into our project in at least two ways. First, we've made a commitment to not shy away from conversations about race, ethnicity, and gender - our own and that of the teachers and students who appear in the video we're using - and we're just beginning to have these conversations as we begin to analyze the data. Second, and related to the part of your question regarding the human researchers, the human-based analysis provides chances for us to bring our priorities and goals to the (qualitative) analyses of these videos. But that also means that we can bring biases to these analyses (and conversations about them), too, and so I think offering training about (or, at minimum, structured opportunities to discuss) race, ethnicity, and gender and how they apply to this work is a really great idea.

    Thank you for asking this. I for one would welcome hearing about any suggestions about what kinds of training would potentially be the most impactful - especially at this early stage in this work.

    Thank you again!

     

  • May 11, 2020 | 02:30 p.m.

    Thanks so much for your thoughtful response! This all makes complete sense. This is definitely not my area of expertise (just something I'm interested in), so I'm not sure I have any wisdom to share.  I agree, though, that training would be a great opportunity to get people talking about how bias can be transferred to technology, and how to be aware of that.  I'm excited about the work you're doing and your care on all of this.

  • Raffaella Borasi

    Higher Ed Faculty
    May 7, 2020 | 07:54 p.m.

    I also have used classroom videos as valuable data sources to help determine the effectiveness of professional development on teachers' practices - and found it very difficult to pay attention systematically to all that is going on. I see a lot of potential in your tools to help researchers in teacher education evaluate their interventions.

    I hope we will be able to use some of your tools to support STEM teacher education in the near future! 

  • Cynthia D'Angelo

    Co-Presenter
    May 8, 2020 | 09:37 a.m.

    Hi Raffaella! Thanks for your comment. At this stage of the project, we are primarily focused on producing tools that will help researchers. I think it would be great to figure out ways to eventually use this to support STEM teacher education as well. 

  • Ed Robeck

    Facilitator
    May 8, 2020 | 08:58 a.m.

    This research is fascinating to me. Since I first saw your video I've been on a dozen or more on-camera teleconference calls. In one of them (and then others) my mind wandered to what your system could do with facial, body, and eye movements during teleconferences. As more instruction goes online, I wonder how your system could help inform effective online practice with respect to synchronous class teleconference sessions. This is outside of what you're doing, of course, but it is an example of how generative your work is in terms of other applications.

  • Cynthia D'Angelo

    Co-Presenter
    May 8, 2020 | 09:39 a.m.

    Hi Ed, thanks so much for your comment. I agree that there are a lot of potential applications for this work and similar types of approaches. We really hope that this is just the beginning of a line of work that can have positive implications for many areas of educational research. 

  • Jeremy Roschelle

    Researcher
    May 8, 2020 | 01:59 p.m.

    Hi all, it was great to check in with your work via the video. Such challenging, innovative and impressive work! Go cyberlearning!

  • Cynthia D'Angelo

    Co-Presenter
    May 8, 2020 | 02:00 p.m.

    Thanks Jeremy!

  • Jennifer Borland

    Researcher
    May 12, 2020 | 08:27 a.m.

    This is a really interesting idea - with some pretty profound implications (potentially down the road) for teacher evaluation and pre-service teacher training. As I was watching, it seemed like camera placement is an important element in how the computer will ultimately code a teacher's or student's position. Does the computer account for camera position, and is it able to read the same stance (e.g., a teacher leaning over a student to help them) from different viewing angles? I'm also curious to learn if you have (or could) look at students' positions as indicators of attention and engagement. Best of luck with your ongoing efforts on this project!

  • Nigel Bosch

    Co-Presenter
    May 12, 2020 | 10:29 a.m.

    Thanks for your comments! Camera placement is indeed a super important element. We've just started comparisons between angles, so there are still a lot of unknowns. However, preliminary results are very encouraging. We're able to detect the position and pose of students/teachers with quite similar success rates across two camera angles in our tests so far. The biggest question is whether our findings will change notably with different camera angles, which is something we're planning to test.

    I think we could look at student position as an indicator of attention and engagement. Similar projects have done so with some success. However, I don't think that will be a high priority for this project because we're more focused on starting with unsupervised discovery, where we find themes emergent from the data, which may or may not include student attention/engagement.
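    Unsupervised discovery along these lines is often done by clustering per-segment features; a minimal k-means sketch (the feature values below are invented for illustration, not the project's actual features) might look like:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means: group feature vectors (e.g., per-segment movement
    and speech statistics) into k emergent clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # Recompute centers as cluster means (keep old center if empty)
        centers = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Hypothetical per-segment features: (movement amount, overlapping-speech fraction)
segments = [(0.1, 0.1), (0.2, 0.15), (0.9, 0.8), (1.0, 0.75)]
centers, clusters = kmeans(segments, k=2)
print(sorted(len(c) for c in clusters))  # [2, 2]
```

    The clusters that emerge may or may not correspond to constructs like attention or engagement; interpreting them is where the qualitative side of the analysis comes back in.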

  • May 12, 2020 | 01:40 p.m.

    This has the promise of a great tool that could provide invaluable information to dashboards for teachers to use in real time in their classrooms. In our work with the James S. McDonnell Foundation Teachers As Learners program, we have a research team led by Prof. Ogan at Carnegie Mellon who is also working on automated detection in classrooms. Perhaps your technologies complement one another. If you are interested, please reach out to her and her lab: https://www.amyogan.com/


  • Nigel Bosch

    Co-Presenter
    May 12, 2020 | 01:58 p.m.

    Hi Michael,

    There are indeed some strong similarities between our lines of work. In fact, I think both projects use the same underlying computer vision technology, just for different purposes. It will be interesting to see how much our results align by the end of the project as well.

  • Further posting is closed as the showcase has ended.