For the past couple of weeks, I have been struggling with how
computational thinking can be incorporated into a classroom’s curriculum, and,
once it has been successfully integrated, with the question Grover & Pea pose:
“What can we expect children to know or do better once they’ve been
participating in a curriculum designed to develop CT?” With the examples from
this week’s readings, I have started to recognize the integration of
computational thinking (into fields outside of computer science) simply as a
tool, not necessarily as something that provides content and needs to be
assessed in the traditional sense.
After clarifying the different domains of computational
thinking, Weintrop describes three examples that fit within these domains. The
first model involves creating a video game that investigates the laws of
physics. Prior to working on this game, students are expected to have already
encountered position, velocity, and acceleration (and other physics concepts). The game
simply provides them with a place where they can deepen their understanding of these
concepts by manipulating variables. The second encourages students to
acknowledge the need for computers in sequencing the human genome. In this
example, students come to the model understanding the genetic code and how DNA
works; the model does not explain this for them. Rather, students have the
chance to delve deeper into the complexity of DNA using computational thinking.
Finally, Weintrop describes an interactive simulation of chemistry gas laws.
Students run experiments, visualize their data, and so on, all within a simulated
environment. To me, this seems no different from actually doing the experiment
in a chemistry lab (and not much more content is learned), except that it takes
place on a computer and draws on computational thinking. In all of these
examples, computational thinking is used simply as a method for understanding the
content more deeply.
Even after designing the Scratch lesson plan with my fellow
science-minded classmates, I find that what we’re doing is not necessarily
teaching students about the immune system. Rather, we are providing them with a
tool that helps learning about the immune system become more syntonic with
their prior understanding. Programming can bring the concept to life, present
students with challenges to overcome, and excite them to learn more. It must
still be supplemented, however, with a lesson plan or other in-depth
material on how the immune system works. Programming simply gives students the chance
to apply what they learn to something that they care about.
So if it’s a tool for understanding content, does it need to
be constantly assessed? Can we simply integrate it at a young age and allow
students to build on their computational thinking skills as they progress
through grade school? I am assessed in my reading ability every week when I
come to class to discuss the week’s readings. I am assessed in my
writing ability when I submit a paper to a research journal. Is it
enough to assess computational thinking by periodically giving students a
program and asking them to change some part of it, as the 4- to 8-year-old
children in Kaput, Noss, and Hoyles’ study did in changing the spaceship to Pikachu?
You make an interesting point. I think what you're saying is, "If we are using X to more fully understand Y, and we can assess how well the students are understanding Y, can we not just assume that X is being covered in the process?"
If that's true, I submit that computational thinking (or at least my fledgling understanding of it) is more than just a tool. I think the authors are trying to describe and catalog its utility for cross-domain application in order to gather buy-in from educators in other fields. Computational thinking, by its very name, is a specific way of thinking, one with many benefits if we are to believe the authors of the last few weeks' readings. And since we are trying not only to teach new content (the stated purpose of the various classes: science, math, etc.) but also to develop a new way of thinking, I do believe it is necessary to assess indicators of the X. In the long run, the specific content areas may even matter less to students’ success than their ability to think computationally. What a shame it would be to neglect assessing that skill for the sake of one or two more questions about covalent bonds, or solving one more math problem.
I agree with Keith - very interesting point.
This is a bit of a stretch, but it reminds me of the discussion on constructivism and how it is a pedagogical concept - a theory of how to teach, not how to learn. Perhaps computational thinking, or the goal of utilizing computational thinking skills, plays a larger role in the development of curriculum modules (children will be empowered in different ways, but we can offer the computational thinking tools or ideas needed along the way).
I also lean more towards hoping there is a way to assess whether students are correctly utilizing computational thinking skills. I have not used it before, but could it be assessed in a way similar to standards-based grading?
Jennifer, I think you raise an interesting point about our need to assess everything that students are learning rather than simply using computational thinking as an additional classroom tool to help students learn the information. However, if we keep it a tool, I wonder how often it will be used, how computational thinking skills will progress and develop over time, and whether providing only minimal instruction and guidance will limit advanced understanding and skills to privileged students with access to the internet and computers.
Nicole, I like your idea of using standards-based grading, but I wonder, if we limit programming to individual skills, how this can increase students' depth of understanding of the complexities of computational thinking. Since computational thinking was added as a practice in the first dimension of the NGSS, should it be developed with its own curriculum, similar to math and language arts in the CCSS? How do we assess skills that may take additional time to demonstrate, such as divergent thinking and logic? Are "deconstruction, reverse engineering, and debugging" the only ways to analyze and evaluate computational thinking?