David Touretzky, Lead Presenter
Research Professor, Carnegie Mellon University
https://www.cs.cmu.edu/~dst

Christina Gardner-McCune, Co-Presenter
Assistant Professor, University of Florida

Deborah Seehorn, Co-Presenter
NC ECEP State Lead

Project: Developing K-12 Education Guidelines for Artificial Intelligence
http://AI4K12.org
Public Discussion



    D. Matthew Boyer

    Researcher
    May 4, 2020 | 01:36 p.m.

    This is incredible work. Thanks so much for your presentation, Christina.

    As your project develops, I'll be eager to hear how you think we can best explore issues of algorithmic bias in K-12. Is anyone from an organization like Algorithmic Justice League involved in your symposium (yet)?

     

    David Touretzky

    Lead Presenter
    Research Professor
    May 4, 2020 | 01:54 p.m.

    Thanks for your comments, Matthew. We are aware of the Algorithmic Justice League and admire their work.

    Our working group includes Cynthia Breazeal of the MIT Media Lab, whose doctoral student, Blakely Payne, has developed an excellent AI+Ethics curriculum for middle school. Algorithmic bias is one of the topics she considers. We have a link to her curriculum in our Resources Directory at http://AI4K12.org. You can also find it at https://aieducation.mit.edu/

Another expert on societal implications of bias in machine learning, Wells Santo, is a member of our Advisory Board. And we have several teachers in our working group who are particularly interested in this issue. It is one of the topics we cover in Big Ideas #3 (learning) and #5 (societal impacts).

    What are your own thoughts on how algorithmic bias should be taught in K-12?

     

    D. Matthew Boyer

    Researcher
    May 4, 2020 | 02:24 p.m.

    Thank you for the additional resources, David. I'm sure you don't need me to say, but this is such essential work that I'm enthused to see the exceptional group you have created.

For K-12, I've only scratched the surface of AI-related issues when teaching preservice Teacher Ed students in their requisite edtech course, but I think one of the core issues for them to explore is opacity in AI: what's happening "behind the scenes," as well as what they are able to control or change. I use algorithmic bias as a way into the topic, but many become most engaged when talking about what Siri, Alexa, or Google Assistant is doing. From your project, helping learners better understand how the AI actually makes driving decisions is the kind of experience I would hope they have when exploring issues of opacity.

    Thank you again for sharing this work.


    David Touretzky

    Lead Presenter
    Research Professor
    May 5, 2020 | 01:52 p.m.

    I agree completely about the need for "glass box" demos as opposed to "black box" demos. Glass box demos are much harder to construct, but they do a lot more to facilitate learning. A great example is Google's TensorFlow Playground, which lets students see inside a neural network as it learns.

    We are trying to build a community of developers to create more resources like this. We'll be chairing a special track at EAAI-21 (Educational Advances in Artificial Intelligence) to encourage more work in this area.

     

    David Touretzky

    Lead Presenter
    Research Professor
    May 4, 2020 | 01:46 p.m.

    Welcome to the AI4K12 Initiative STEM Video Showcase Page! Thank you for taking a few moments from your busy schedule to view this video. 

    We are very grateful to our AI4K12 Working Group for the time and expertise that they have contributed to our initiative during the past two years.  We have not quite completed our work, but we have made tremendous progress. 

    This video represents just a snapshot of the AI4K12 Initiative. We are excited to engage with you all this week and answer any questions you may have about our project, but we are also interested in learning from YOU: 

    • If you are thinking about teaching K-12 students about AI, what do you want to know or have access to?

• Tell us how you are engaging students in learning AI. Examples, please!

     Check us out on Twitter: #AI4K12 and @AI4K12

    Visit us at http://AI4K12.org

    Join our mailing list: https://aaai.org/Organization/mailing-lists.php

    Download our free poster on the Five Big Ideas in Artificial Intelligence. (Versions are also available in Chinese, Korean, Spanish, Hebrew, and Turkish; see our web site for links.)

    And before you leave the showcase, be sure to vote for your favorite video (even if it isn’t ours)!

     

    Christina Gardner-McCune

    Co-Presenter
    Assistant Professor
    May 4, 2020 | 02:23 p.m.

     Hi Matthew!!!!!

    Thanks for stopping by to chat about our video.

I know Joy from when she was an undergrad at GT. It has been awesome seeing her research and advocacy develop. Like Dave said, we admire their work. We are drawing on a lot of the work from the ACM FAccT community, by way of introduction from Wells Santo.

    I look forward to catching up soon.

    - Christina 

     

    D. Matthew Boyer

    Researcher
    May 4, 2020 | 02:31 p.m.

    I should've made that GT connection, but I'm so glad to be reminded. Her work was the hook that got me interested.

    I'm excited to explore these resources and learn more about your work. As always, you are amazing!


    Alan Peterfreund

    Facilitator
    Executive Director
    May 4, 2020 | 06:15 p.m.

     Impressive effort and a terrific team. Having seen the challenges of getting CS accepted into K-12, it is great to see the strong integration with CSTA and other CS Ed folks. With Cybersecurity and Data Science also competing for space in the K-12 curricula, how do K-12 education leaders at the state and district levels become informed stakeholders in this process?


    David Touretzky

    Lead Presenter
    Research Professor
    May 4, 2020 | 10:26 p.m.

Thanks for your question, Alan. We agree that there is a huge need to educate teachers and administrators about the basics of AI; it's one of the areas of computer science where they have the least background. Many of the teachers and administrators we talk to start out by asking us for the definition of "AI". (We tell them that AI is a subfield of computer science concerned with techniques that allow computers to do things that, when people do them, are considered evidence of intelligence.)

We've done a bunch of teacher PD events at conferences over the past year, including AAAI, SIGCSE, ISTE, and CSTA. And we're doing more. That's one way to help people get up to speed. Another suggestion:

    There is some overlap between data science and machine learning. And machine learning is a branch of AI (and is covered in our Big Idea #3). So people who are interested in teaching data science might naturally be recruited to teach some AI as well.


    Maureen Biggers

    Higher Ed Administrator
    May 7, 2020 | 02:02 p.m.

    Hi Dave. What a great resource and project! I am helping to coordinate an AI Bootcamp for nontechnical faculty and students at Indiana University for sometime next academic year and will review your resource directory and be in touch with you. Great team you have too!


    David Touretzky

    Lead Presenter
    Research Professor
    May 7, 2020 | 04:54 p.m.

    Great to hear from you, Maureen! I'd love to help out with your AI Bootcamp planning.

    There are many great resources listed in our resource directory. I recently released a cloud-based version of my Calypso framework that allows novices of all ages to experiment with AI technologies such as speech recognition and generation, pattern matching, and path planning. It runs in the browser, and you can try it for free at https://calypso-robotics.com/

    We have free curriculum at https://www.cs.cmu.edu/~dst/Calypso/Curriculum/


    Patrick Honner

    Facilitator
    Teacher
    May 5, 2020 | 09:19 a.m.

    This is an exciting project. I appreciate the focus on demystifying AI for students, as well as the explicit acknowledgement of the civic and policy consequences that all new technologies entail.

    It seems like one aspect of demystifying AI must be making the mathematics transparent. How do you plan on collaborating with mathematics teachers in this process? And to what extent, if any, do you see mathematics as the channel into classrooms for your work?

     

    David Touretzky

    Lead Presenter
    Research Professor
    May 5, 2020 | 01:22 p.m.

    Thanks for your question, Patrick.

    Since AI is a branch of Computer Science, I think we face the same issues regarding math as other areas of CS. On the one hand, mathematical concepts are foundational to CS. On the other hand, we want to make the material accessible to as broad a range of students as possible, including those with weak math backgrounds and those who are disinclined toward mathematical thinking but might still find other aspects of computer science attractive.

Probability and statistics are two topics that are seen as increasingly important for K-12 students. In AI, they form the basis of probabilistic reasoning (part of our Big Idea #2). They're also important in data science, which is related to machine learning (Big Idea #3). In addition, knowledge representation (another part of Big Idea #2) relies on discrete mathematical structures such as trees and graphs, and on propositional and predicate logic.

Some of these topics can be introduced at a very early age. For example, one popular exercise is to create a decision tree to guess which animal a person is thinking of based on a series of questions such as "Does it fly?" You can do this with kids in K-2 if the teacher scaffolds the activity. Drawing the decision tree on the board illustrates knowledge representation. Following a procedure to grow the tree by adding a new animal distinguished by a new question illustrates the notion of machine learning. And talking about the tree itself (terminal nodes, parent nodes, depth, branching factor) introduces the basics of graph theory.
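For teachers comfortable with a little code, the animal-guessing activity can be sketched in a few lines of Python. This is a hypothetical illustration (the questions, animals, and function names are invented here, not taken from the AI4K12 guidelines): the tree is nested tuples, and "learning" grows the tree by replacing a wrong leaf with a new distinguishing question.

```python
# Decision tree for the animal-guessing game: each internal node is a tuple
# (question, yes_branch, no_branch); each leaf is an animal name.
tree = ("Does it fly?",
        ("Does it have feathers?", "bird", "bat"),
        ("Does it live in water?", "fish", "dog"))

def guess(tree, answers):
    """Walk the tree using a dict of yes/no answers; return the animal at the leaf."""
    if isinstance(tree, str):              # leaf node: this is our guess
        return tree
    question, yes_branch, no_branch = tree
    branch = yes_branch if answers[question] else no_branch
    return guess(branch, answers)

def grow(tree, answers, new_question, new_animal):
    """A simple form of learning: replace the wrong leaf with a new question
    distinguishing the old guess from the new animal (assumes the new question
    is answered 'yes' for the new animal)."""
    if isinstance(tree, str):
        return (new_question, new_animal, tree)
    question, yes_branch, no_branch = tree
    if answers[question]:
        return (question, grow(yes_branch, answers, new_question, new_animal), no_branch)
    return (question, yes_branch, grow(no_branch, answers, new_question, new_animal))
```

With older students, the same structure invites the graph-theory vocabulary mentioned above: leaves, parent nodes, depth, and branching factor.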

    At the upper grade levels, neural networks involve linear algebra, and the backpropagation learning algorithm that drives most neural net applications requires vector calculus. So the connection to mathematics is clear, but one can still explore neural nets in interesting ways even without the mathematical skills to derive the underlying algorithms.

It's unlikely that elementary schools have room in their curriculum for an AI elective, and this might be a stretch for many middle schools as well. So we will have to find ways to help teachers in other subject areas introduce AI concepts into their classrooms. Math and science teachers are obvious allies here, but we think there are interesting ways to involve language arts and social studies teachers as well. For example, you can use Google's Speech Recognition API to explore topics such as homophones and the use of context for disambiguation. See this page for exercises that do this: Speech Recognition in Chrome.

    We'd love to hear your own thoughts on how we should be working with mathematics teachers to help students appreciate the mathematical underpinnings of AI, or use AI to get students excited about math.

     

    Patrick Honner

    Facilitator
    Teacher
    May 6, 2020 | 08:58 a.m.

    That's a good point about how these modern CS-driven projects (AI, Data Science, etc) have the same issues with math. I can see the tension there: On the one hand you want to leverage the math instruction that's already happening, but on the other hand you don't want this to be seen as a math class (for a variety of reasons). Personally I'd like to see more integration of the two disciplines, but I get it.

    I love the elementary school example of classification. It made me laugh, as it put me in mind of a group of mathematicians on Twitter who are constantly arguing about whether or not things are sandwiches. Classification is a rich and complex problem, but as you point out, accessible to all.

    I've been fortunate to teach linear algebra and vector calculus at the HS level for many years, and would love more curriculum-adjacent resources to help bring concrete ideas from AI to students. But these courses are relatively rare in the HS sequence. In my comment I was thinking more about things like conditional probability (playing an increasingly large role in Algebra 2 courses) and regression (a big part of AP Statistics, which is quickly becoming a HS staple). There must be some opportunities to engage with students (and teachers!) in those units.

     

     


    David Touretzky

    Lead Presenter
    Research Professor
    May 6, 2020 | 11:31 a.m.

    Yes, conditional probability in AI shows up in a topic called Bayes nets, which are used in probabilistic reasoning. It would be cool to build a little interactive Bayes net simulator that students could play with while exploring this topic.
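As a sketch of what such a simulator might compute, here is a minimal two-node Bayes net in plain Python. The variables (Rain, WetGrass) and all the probabilities are made-up illustrations, not material from our guidelines; the inference is just Bayes' rule by enumeration.

```python
# Two-node Bayes net: Rain -> WetGrass, with illustrative probabilities.
P_rain = 0.2
P_wet_given_rain = 0.9
P_wet_given_no_rain = 0.1      # sprinklers and dew make this nonzero

def p_rain_given_wet():
    # Bayes' rule: P(R|W) = P(W|R) * P(R) / P(W),
    # where P(W) is found by summing over both values of Rain.
    p_wet = (P_wet_given_rain * P_rain
             + P_wet_given_no_rain * (1 - P_rain))
    return P_wet_given_rain * P_rain / p_wet
```

A student can change the three numbers with sliders and watch the posterior move, which is exactly the kind of interactivity conditional probability units could use.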

    As for regression: do they cover more than just linear regression in high school? Because training a neural net can be seen as a massively nonlinear, very high-dimensional regression problem. It would be good to make that connection for students.
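To make that connection concrete, even ordinary linear regression can be fit the way a neural net is trained: by gradient descent on squared error. This is a toy sketch with invented, noiseless data; a neural net does the same thing with a nonlinear, vastly higher-dimensional model.

```python
# Fit y = a*x + b by gradient descent on mean squared error.
xs = [x / 10 for x in range(-20, 21)]
ys = [3 * x + 1 for x in xs]          # data generated from y = 3x + 1

a, b = 0.0, 0.0                        # initial parameters
lr = 0.05                              # learning rate
for _ in range(2000):
    # gradients of mean squared error with respect to a and b
    grad_a = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    a -= lr * grad_a
    b -= lr * grad_b
```

After training, a and b recover the slope and intercept. Swapping the linear model for a network with hidden units is, conceptually, the only change backpropagation adds.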


    Carol Fletcher

    Researcher
    May 5, 2020 | 03:23 p.m.

    This is a great project and glad to see you are specifically developing resources for teachers that can be easily translated into the classroom. I look forward to learning more at the CSTA virtual conference and sharing this with our @WeTeach_CS network!


    Yang Cheng

    May 5, 2020 | 04:25 p.m.

    This is an important initiative that we're happy to have supported. As part of my company ReadyAI's effort to also promote AI education in K-12, we have developed a free 50-minute course on the Five Big Ideas called AI+ME.

AI+ME walks students through everyday examples that they relate to, such as recognizing facial expressions in their peers to learn about perception. It has been used in schools as students' first formal introduction to AI. We've found that the Five Big Ideas really consolidate the broad field of AI into digestible "chunks" for teachers, so they are better able to understand what AI is and how to teach it in classrooms.

    Thanks to Dave and Christina and everyone involved for their continued contribution to AI in K-12!

    The link to the AI+ME course is: https://edu.readyai.org/courses/aime/

     

    Renee Fall

    Researcher
    May 5, 2020 | 05:43 p.m.

    Such an important effort to make learning about AI accessible to students and teachers, thank you.  The 5 big ideas are well communicated, and I appreciate that social impacts are at the center in your visualization. I hope AI4K12 will continue and perhaps expand to serve as a network that connects AI education with broader CS and STEM education efforts.


    Deborah Seehorn

    Co-Presenter
    NC ECEP State Lead
    May 6, 2020 | 04:33 p.m.

I'm so happy that you noticed that Societal Impact is at the center of our 5 Big Ideas visual! That was intentional, as Big Idea #5 impacts and intersects with the other four.


    Stacey Forsyth

    Informal Educator
    May 6, 2020 | 04:38 p.m.

    Has AI4K12 developed any instruments to evaluate K-12 students' knowledge of or attitudes toward AI? Or do you have any recommended citations of studies that have looked at K-12 students' understanding of AI, particularly in terms of its societal impacts?


    David Touretzky

    Lead Presenter
    Research Professor
    May 6, 2020 | 04:55 p.m.

    Great question. Cynthia Breazeal's group at the MIT Media Lab has looked at the attitudes of children toward intelligent agents like Alexa, and toward interactive kid-friendly robots. (Cynthia is a member of the AI4K12 working group.) And Blakely Payne's AI+Ethics curriculum invites students to examine societal impacts of AI. But I don't know of any formal assessment instruments. Maybe some other people reading this discussion can contribute some references.

    While kids' knowledge of AI is our primary focus, we are also very concerned about teachers' knowledge of AI. There is a huge need for PD in this area. Many teachers don't have a good definition of AI, or don't realize that AI is a branch of computer science. (And some falsely believe that AI is the same as Machine Learning.) I think a formal assessment of K-12 STEM teachers' knowledge of AI would confirm the great need for PD resources. Fortunately, many providers are working on that, including ISTE, Microsoft, and ReadyAI.


    Nathan Auck

    Facilitator
    STEM Coordinator
    May 6, 2020 | 06:44 p.m.

I too am an advocate for making the AI conversation more visible in the K-12 curriculum, while finding integrated opportunities with other subjects (including but not limited to math and science, as mentioned above). My wonder is about AI's application in education. Though I know many polarizing conversations can develop around student data privacy, I'm wondering in what ways you see AI affecting classrooms in the near future? These would be great initiatives to engage students in, if AI were impacting their educational experience!


    David Touretzky

    Lead Presenter
    Research Professor
    May 6, 2020 | 07:08 p.m.

    There are two camps of people concerned with "AI in K-12". The larger, more well-established group is the one focused on using AI technologies in the classroom, e.g., for intelligent tutoring systems. They are largely agnostic as to subject matter. The newer, smaller group, of which our project is a part, is concerned with teaching kids specifically about AI. These two groups have almost no overlap.

    Nonetheless, as you point out, when AI-powered systems are visible in the classroom, this will be yet another instance of how AI is present in children's daily lives, and they may have questions about how it works. But I hesitate to make predictions about when we will see more AI-powered tutoring systems in the schools. People have been working on this for decades. Progress is being made; there are plenty of success stories out there. An intelligent tutoring startup in China, Squirrel AI, has a billion dollar valuation and is now highly visible in the AI community. But the trajectory from small successes to widespread adoption in the US is not something I'm equipped to speculate about.

     

    Carol Fletcher

    Researcher
    May 6, 2020 | 07:32 p.m.

Yes, I remember the highly touted "School of One" program, now called "Teach to One": personalized learning for middle school math. I remember it being advertised when it first came out as being like Pandora for math instruction. Definitely mixed results: http://blogs.edweek.org/edweek/DigitalEducation.... I think the lack of breakthrough progress in the intelligent tutoring world might be a testament to how truly difficult it is to be a master teacher capable of providing just the right kind of differentiated instruction and support for a wide variety of students (simultaneously, to boot). Maybe the challenge of applying AI to instructional settings will make non-educators appreciate K-12 teachers a bit more! It's an incredibly difficult job to do well day in and day out under the best of circumstances.


    David Touretzky

    Lead Presenter
    Research Professor
    May 7, 2020 | 02:58 a.m.

    Today we want to focus on our Big Idea #1: Perception. What should K-12 students understand about perception in AI systems? We've identified two essential understandings:

    • Perception is the extraction of meaning from sensory information using knowledge.

    Devices that perform only trivial processing of signals, and don't require knowledge to do so, are not engaging in perception. Thus, a supermarket door that opens when you step on a pressure pad or activate an ultrasonic sensor is not exhibiting perception. But a self-driving car that detects and obeys stop signs is definitely perceiving the world around it.

    • The transformation from signal to meaning takes place in stages, with increasingly abstract features and higher level knowledge applied at each stage.

    We call this the "abstraction pipeline". Sometimes it's referred to as the "feature hierarchy". For example, in computer vision, we start out with pixels and move to edges, contours and textures, boundaries, surfaces, objects, and finally scenes. Deep neural networks can learn feature hierarchies on their own, by exposure to millions of images. To see what the feature hierarchy of a deep neural network called GoogLeNet looks like, visit this link: https://distill.pub/2017/feature-visualization/appendix/
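A hypothetical sketch of the very first stage of that pipeline, pixels to edges, in plain Python (the filter and image here are invented for illustration; deep networks learn filters like this in their earliest layers):

```python
# Pixels -> edges: difference adjacent pixels in each row; large values
# mark vertical edges in the image.
def horizontal_edges(image):
    """Return the absolute horizontal gradient of a 2D list of pixel values."""
    return [[abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
            for row in image]

# A 3x4 "image": dark on the left, bright on the right.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edges = horizontal_edges(image)   # responds only at the dark/bright boundary
```

Later stages would combine such edge responses into contours, surfaces, and objects; the principle at every stage is the same, with features growing more abstract.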


    David Touretzky

    Lead Presenter
    Research Professor
    May 7, 2020 | 06:00 p.m.

    Big Idea #2 says: "Agents construct representations of the world and use them for reasoning." Representation and Reasoning are two closely-related concepts in AI. Why is that?

    • Representations drive reasoning.
    • Reasoning algorithms operate on representations.

    Many types of reasoning algorithms can be found in AI. Some of the major types are:

    1. Classification (sort pictures into those containing cats and those with no cats)
    2. Regression (predict home values based on age, square footage, and zip code)
    3. Clustering (find customers whose tastes are similar)
    4. Heuristic Search (find a good driving route from my house to the airport)
    5. Planning (what steps should I perform, in which order, to achieve a complex goal?)
    6. Optimization (what mix of products should my factory produce to maximize profit?)
    7. Theorem Proving (prove that this electronic circuit computes what it's supposed to every time)
    8. Inference (given the start of a story, predict what the characters are likely to do next)

    These reasoning algorithms use different representations, but at a high level, representations can be divided into two broad types:

    1. Symbolic representations use lists, trees, or graphs. They are designed by AI developers based on the task the developer wishes to use them for, and therefore tend to be understandable by humans.
    2. Feature vector representations use arrays of numbers. They are constructed by machine learning algorithms and are hard for humans to interpret, but can encode subtle statistical information that may be difficult to express symbolically.

    We'll learn more about feature vector representations when we explore Big Idea #3.
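A toy illustration of the two styles, hand-built for this discussion (the entities, links, and the vector's numbers are all made up): the symbolic graph supports readable reasoning by following links, while the feature vector is just an opaque array of numbers.

```python
# 1. Symbolic representation: a human-readable graph of "is_a" relations,
#    stored as an adjacency dict.
knowledge = {
    "cat": {"is_a": "mammal", "sound": "meow"},
    "mammal": {"is_a": "animal"},
}

def is_a(entity, category):
    """Simple symbolic reasoning: follow 'is_a' links upward through the graph."""
    while entity in knowledge:
        parent = knowledge[entity].get("is_a")
        if parent == category:
            return True
        entity = parent
    return False

# 2. Feature vector representation: "cat" as an array of numbers of the kind
#    a machine learning system would construct. Easy to compare numerically,
#    hard for a human to interpret.
cat_vector = [0.21, -1.30, 0.87, 0.05]
```

The same fact ("a cat is an animal") is recoverable from the graph by a two-step chain of reasoning, but is only implicit, at best, in the vector.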

     

    David Touretzky

    Lead Presenter
    Research Professor
    May 8, 2020 | 03:16 p.m.

Big Idea #3 is that "Computers can learn from data." Machine learning has been responsible for many of the recent advances in AI technologies; speech recognition, for example, improved enough to become commercially useful rather than an academic curiosity. Similarly, the best face recognition algorithms are now built using machine learning, and this is true for many other AI applications as well.

    But what is "learning"? People and computers don't learn the same way. Our definition, which applies to both people and computers:

    • Learning is a change in representation resulting in a change in behavior.

    Why is machine learning so useful for AI? Because programming algorithms for intelligent behavior is extremely difficult.

    • Machine learning allows a computer to change its behavior without being explicitly programmed.
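A toy example of both statements at once (illustrative only, not from our guidelines): a one-neuron perceptron whose "representation" is three weights. Training on an AND-gate dataset changes the weights, and the changed weights change the behavior, with no change to the program itself.

```python
def predict(weights, inputs):
    """Behavior: output 1 if the weighted sum of inputs plus bias is positive."""
    bias, w1, w2 = weights
    return 1 if bias + w1 * inputs[0] + w2 * inputs[1] > 0 else 0

def train(data, epochs=20, lr=0.1):
    """Learning: repeatedly nudge the weights toward correct answers."""
    weights = [0.0, 0.0, 0.0]             # initial representation
    for _ in range(epochs):
        for inputs, target in data:
            error = target - predict(weights, inputs)
            weights[0] += lr * error                 # perceptron learning rule
            weights[1] += lr * error * inputs[0]
            weights[2] += lr * error * inputs[1]
    return weights

and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = train(and_gate)     # the representation after learning
```

Before training, the network outputs 0 for everything; after training, it computes AND. The only thing that changed is the representation.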

    Our guidelines for Big Idea #3 focus on three main concepts:

    1. The nature of learning, including different types of learning algorithms.
    2. Neural network learning architectures, which are the dominant approach in machine learning today.
    3. Datasets used for learning, including the design of feature sets and the effects of biased data.

    Big idea #3 makes contact with all the other big ideas:

    • With big idea #1 because machine learning techniques are giving us better perception algorithms.
    • With big idea #2 because machine learning can be used to construct several types of reasoners (e.g., classifiers, predictors, search heuristics).
    • With big idea #4 because machine learning is used extensively in natural language understanding applications such as machine translation or auto-generated email replies.
    • With big idea #5 because bias in machine learning applications can have negative impacts on society.

    What do you think are some of the differences between the way people learn and the way computers learn?


    David Sittenfeld

    Informal Educator
    May 9, 2020 | 06:16 p.m.

    Thanks for sharing this fascinating work!  Can you talk about how you are identifying and framing the important societal and policy pieces you describe?  We've done a bit of work in this space (but we're not experts at all!) and I've really been impressed at the substantive and diverse values and thoughtful perspectives that people bring to considerations about the societal and ethical implications of artificial intelligence and its various applications.  Our colleagues at Arizona State University did some open-framing listening sessions to inform a global deliberation on autonomous mobility that we later produced together (https://themobilitydebate.net/). 

    I was consistently impressed by the depth and thoughtfulness everyday community members brought to this, and I'm sure you're getting a wide range of values and perspectives around all of the AI applications you're bringing up here.


    David Touretzky

    Lead Presenter
    Research Professor
    May 9, 2020 | 06:44 p.m.

    Thanks for your question, and the link to the mobility debate web site.

    We are developing guidelines, not curriculum, so we're not diving deep into discussions of the impact of particular AI-based technologies -- at least not yet. The guidelines say that students should learn that technology can impact different groups in different ways. Self-driving cars are a good example: the same technology that offers mobility to non-drivers threatens unemployment for professional drivers. Other issues we draw attention to are the effects of bias in decision-making systems, and harmful uses of AI technologies, e.g., deep fakes or mass surveillance. But we also look at positive impacts of AI, such as the ability to provide new types of services, or to make existing services better (e.g., improved accuracy of medical diagnoses). There is a movement called "AI for Social Good" with its own literature and conferences, that tries to find ways to use AI to make positive impacts on society, such as helping with energy conservation or protecting endangered species.

    We'd love to hear more about your own work in this area. 

     

    David Sittenfeld

    Informal Educator
    May 9, 2020 | 06:53 p.m.

    That's really interesting - thanks!  We are hoping to do engagement with youth about the societal and ethical dimensions of the NAE's Grand Engineering Challenges, of which AI is an important one. Totally agree that we want to get folks thinking about the tradeoffs, affordances, and uncertainties so we as a society will maximize the benefits of these emerging technologies for as many different communities as possible, while minimizing the potential for negative and/or unanticipated consequences. Good luck in the competition!


    David Touretzky

    Lead Presenter
    Research Professor
    May 9, 2020 | 06:53 p.m.

Big idea #4 is Natural Interaction: "Intelligent agents require many kinds of knowledge to interact naturally with humans." This covers a wide range of topics, including:

    • Natural language understanding
    • Emotion recognition, sentiment analysis, affective reasoning
    • Commonsense reasoning
    • Consciousness and theory of mind; artificial general intelligence

    Many of these topics are not yet well understood, so they can make for lively debates. For example, today we have no algorithms for artificial general intelligence (AGI), which would give computers much more flexible reasoning abilities than they have now. Is AGI even possible? What will it mean when AGI is finally achieved?


    Alan Peterfreund

    Facilitator
    Executive Director
    May 10, 2020 | 10:36 a.m.

    David and team  - thanks for doing such an outstanding job responding to questions and promoting further dialogue with the posting of Big ideas.  Awesome stuff


    David Touretzky

    Lead Presenter
    Research Professor
    May 12, 2020 | 07:52 p.m.

Thanks for your work as Facilitator, Alan. We enjoyed the discussions here.


    Denise Wright

    K-12 Teacher
    May 12, 2020 | 12:22 p.m.

    I look forward to teaching the Five Big Ideas in AI next year to my middle school classes. Enjoyed the Symposium. Excellent video and work. 


    David Touretzky

    Lead Presenter
    Research Professor
    May 12, 2020 | 02:16 p.m.

    Thank you, Denise. If you want to make your own PowerPoints, we now have a set of five individual Big Ideas icons available for anyone to use:

    https://github.com/touretzkyds/ai4k12/tree/master/icons

    Also, if you make your own AI activities, please tell us about them. We are looking to compile a collection of activities by topic and grade band to help jump start AI education in K-12.


    David Touretzky

    Lead Presenter
    Research Professor
    May 12, 2020 | 03:38 p.m.

    Big Idea #5 is Societal Impact: AI can have both positive and negative impacts on society.

    We break this down into two major areas. The first is the social and economic impacts of AI technologies. In many cases intelligent agents won't completely replace humans, but will change the way they work by automating the more routine aspects of their jobs. But some job losses are inevitable. For example, autonomous driving systems that provide non-drivers with greater freedom and mobility will also put many professional drivers out of work. Widespread deployment of video surveillance and face recognition technology means criminals can no longer hide from the police, but it also signals the loss of anonymity in public and threatens the freedoms of dissenters critical of authoritarian governments. Deepfakes offer new artistic possibilities but can undermine our notion of "evidence".

    The second major area is the ethical design of AI applications used to make decisions about people, such as who gets considered for a job, receives a loan, is admitted to college, or is released on parole. Such systems may not treat everyone equitably, even if the designers had no intent to discriminate. When machine learning is used to create these systems, a skewed training set may result in biased decision making. It is important for automated decision making systems to be transparent about the basis for their conclusions, to treat everyone "fairly" (difficult because there are multiple, mutually inconsistent definitions of fairness), and to be accountable to the public.
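    The tension between competing definitions of fairness mentioned above can be made concrete with a small sketch. The data below is entirely hypothetical (invented for illustration, not from the project), and the two metrics shown are just two of the many definitions in the fairness literature: "demographic parity" (equal approval rates across groups) and "equal opportunity" (equal approval rates among qualified applicants). A classifier can satisfy one while violating the other on the very same decisions:

    ```python
    # Toy illustration: one set of decisions, two fairness metrics that disagree.
    # All data here is hypothetical.

    def selection_rate(decisions):
        # Fraction of applicants approved; "demographic parity" compares
        # this rate across groups.
        return sum(decisions) / len(decisions)

    def true_positive_rate(decisions, qualified):
        # Fraction of *qualified* applicants approved; "equal opportunity"
        # compares this rate across groups.
        approved_qualified = sum(d for d, q in zip(decisions, qualified) if q)
        return approved_qualified / sum(qualified)

    # Group A: 4 applicants, 3 qualified; the classifier approves 2.
    a_dec, a_qual = [1, 1, 0, 0], [1, 1, 1, 0]
    # Group B: 4 applicants, 1 qualified; the classifier approves 2.
    b_dec, b_qual = [1, 1, 0, 0], [1, 0, 0, 0]

    # Approval rates are equal, so demographic parity holds...
    print(selection_rate(a_dec), selection_rate(b_dec))        # 0.5 0.5
    # ...but qualified applicants in Group A are approved less often,
    # so equal opportunity fails on the same decisions.
    print(true_positive_rate(a_dec, a_qual),
          true_positive_rate(b_dec, b_qual))
    ```

    Tiny examples like this one suggest why "fair" automated decision making is a design problem, not just an engineering problem: someone has to choose which definition the system will be held to.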

  • Icon for: Michael I. Swart

    Michael I. Swart

    Researcher
    May 12, 2020 | 04:19 p.m.

    As AI makes its way through our educational institutions, there is a definite need for this work. Thanks for sharing and for clearly outlining the objectives of the project, details about the curriculum, and such.

    What kind of data does this project produce? Of teachers usage? Student usage? Perspectives, motivations, interests? Learning outcomes? Plans for diversifying dissemination and integration into schools?


  • Icon for: David Touretzky

    David Touretzky

    Lead Presenter
    Research Professor
    May 12, 2020 | 05:22 p.m.

    I'd say the main piece of data our project produces is a view of what's appropriate and feasible to teach K-12 students about AI. We have a working group divided into 4 grade bands: K-2, 3-5, 6-8, and 9-12. Each grade band has 4-5 teachers with multiple years of experience teaching students in those grades. Most of them are actively teaching today; some have moved on to teacher coaching or curriculum development roles but also still work with kids. We've been in dialog with these grade band experts for nearly 2 years now. Most of them knew little about AI when they started, but several of them had previously worked on the CSTA Computing Standards. So they know what guidelines ought to look like in order to be useful to teachers in the field, which is something our AI subject matter experts (including me) did not have a feel for.

    As these teachers are currently introducing AI into their own classrooms, they are able to provide feedback on what activities work well and what their students are learning. But for many topics there simply aren't good demos or activities appropriate for K-12 students yet. We're hoping to change that by fostering a community of "AI for K-12" resource developers.

    The question of how to integrate AI into schools is complicated, as many schools are still struggling to implement basic computing education across all grade bands. So it's not going to happen overnight, and it will probably be highly dependent on individual teacher initiative for the next few years. We can expect more AI electives to show up in high school, but to get AI into elementary schools is going to require a sustained effort in teacher PD.

  • Icon for: Deborah Seehorn

    Deborah Seehorn

    Co-Presenter
    NC ECEP State Lead
    May 12, 2020 | 07:37 p.m.

    Thanks to all of you who visited, viewed our video, asked thoughtful questions, and made great suggestions!  Be sure to visit us at  http://AI4K12.org.