Thursday 2016-10-06

Another problem in higher-level computational thinking is recognizing when computational thinking isn't the right way to proceed.

For example, say we were talking about Shakespeare. One question we could ask is "Pick a TV show (e.g. Gilmore Girls); what elements, if any, could we say are Shakespearean?"

Another question could be: "Shakespeare's heroes talk differently from his villains; in what ways do they differ?"

The first question seems difficult to approach from a computational thinking perspective: one would need to compare Shakespeare's works to the corpus of all literature prior to ~1600 CE and reduce/isolate what is "Shakespearean", then reduce a TV show to its elements and compare those to the elements we determined to be Shakespearean. What are "elements"? How do we make a big list of them for our comparisons?

The second question seems much more tractable. We know we need to control for socio-economic differences between characters; however, Shakespeare helps us out here, because some of his characters change. Which of his characters change a lot from beginning to end? Othello? So let's get a sample of Othello's speech in the first act and compare it to a sample from the fifth. What looks different? What sounds different? Translate the samples from words to meter. How many meter form violations are there?
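
A minimal sketch of that last step, if we did want to hand it to a computer: count syllables per line and flag lines that stray from the ten syllables of iambic pentameter. The syllable counter here is a crude vowel-group heuristic, not real scansion, and the sample lines are a few remembered lines from Othello's Act I and Act V speeches, included only to make the sketch runnable.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def meter_violations(lines, expected=10, tolerance=1):
    """Return (line, syllable_count) pairs outside the expected range."""
    flagged = []
    for line in lines:
        words = re.findall(r"[a-zA-Z']+", line)
        total = sum(count_syllables(w) for w in words)
        if abs(total - expected) > tolerance:
            flagged.append((line, total))
    return flagged

# Short samples from memory; in practice, paste in longer passages
# from a digital edition of the play.
act_one_sample = [
    "Most potent, grave, and reverend signiors,",
    "My very noble and approved good masters,",
]
act_five_sample = [
    "It is the cause, it is the cause, my soul,",
    "Put out the light, and then put out the light:",
]

for label, sample in [("Act I", act_one_sample), ("Act V", act_five_sample)]:
    flagged = meter_violations(sample)
    print(f"{label}: {len(flagged)} of {len(sample)} lines flagged")
```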

Approached with computational thinking, the first question expands in scope rapidly; to proceed, we either need to find a way to reduce the work or switch to another paradigm.

The second question proceeds relatively directly. While we could have looked at word frequencies, which is doable with a computer and digital copies of the texts, the approach above is computable by hand.
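
For contrast, the word-frequency route is about this much code: a sketch using Python's collections.Counter, again with short placeholder strings standing in for full digital copies of the speeches.

```python
from collections import Counter
import re

def word_frequencies(text: str) -> Counter:
    """Lowercased word counts for a block of text."""
    return Counter(re.findall(r"[a-zA-Z']+", text.lower()))

# Placeholder strings; in practice, use full digital copies of the speeches.
act_one_text = "Most potent, grave, and reverend signiors, my very noble and approved good masters"
act_five_text = "It is the cause, it is the cause, my soul; put out the light, and then put out the light"

freq_one = word_frequencies(act_one_text)
freq_five = word_frequencies(act_five_text)

# Words whose raw count shifts most between the two samples.
shifts = {w: freq_five[w] - freq_one[w] for w in set(freq_one) | set(freq_five)}
for word, delta in sorted(shifts.items(), key=lambda kv: abs(kv[1]), reverse=True)[:5]:
    print(word, delta)
```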

#computationalthinking