Tuesday, 26 June 2018

Modern Life is Rubbish

The 1993 film Demolition Man told us that, in the future, people would be listening to adverts for entertainment. I'm not a big fan of science fiction, so I dismissed this prediction as far-fetched nonsense.

Twenty-three years later, when my daughter was in year 5, I decided to play her some of the singles that I'd bought when I was in year 5. The 80s had just started back in 1978, and while I was spinning those early Gary Numan singles I started to wonder why, when we were obsessed with pop music at that age, my daughter and her friends never mention it. She's now at secondary school and still never mentions it, and, whereas we waited for the new chart to be revealed every Tuesday lunchtime, I suspect that most of my teenage students would struggle to name a number-one single from the last year. 

So what's happened? It's a fairly recent thing. When I was doing my first teaching practice, back in 1997, I wondered why all of the students were walking the corridors singing "On My Radio" by The Selecter. I had yet to hear Aqua's "Barbie Girl", but they all knew it. The deputy head told us, as the PGCE students were welcomed to the school, that it was "every teacher's duty to watch Top of the Pops." 

I still have the first single I bought, and I clearly remember going to buy it – it was 70p and I bought it from the Midland Educational shop in Sutton Coldfield. That made me start to wonder whether technology is the thing that's changed. Saving up the money and going into town to buy a 7" single was quite an occasion for an eight-year-old, and even now I get a sense of occasion as I place the record on the turntable. How many people remember the first video they watched on YouTube, or the first song they streamed from Spotify? 

And while interest in music has declined, Christmas adverts for Sainsbury's and John Lewis currently each have more than 30 million views on YouTube. Sandra Bullock was right. 

It’s not just the way that we listen to music that has changed, either - it’s the way that it’s recorded and reported. Having already ruined the charts by adding streaming data, the Official Charts Company decided that YouTube views would also be included.

When we teach students about the impact of ICT, we usually think about the workplace, the marketplace or the environment, but what about the impact on leisure time and hobbies? Not only do I think that streaming is killing my love of music, but I'm also a keen amateur photographer and I'm not entirely happy about the impact of technology on photography, either. 

Digital photography does have many benefits – you don’t have to wait until you’ve taken 24 or 36 photos and then wait a fortnight for Bonus Print to send them back before you can look at your pictures, for example. And you can take as many as you like and delete the ones that don’t work. On the other hand, it’s also fundamentally changed the nature of photography. When I attend my local photographic club, or look at college courses or books, it seems that the most important aspect of photography is being able to use Lightroom or Photoshop to correct or improve photos that people didn’t take the time to compose or expose properly in the first place. 

Video is similar. When I bought my video camera I looked for a video course – I thought it would be good to know a bit more about long-shots, close-ups, establishing shots, L-cuts and J-cuts, etc. All I could find locally were books and courses on using software such as Adobe Premiere.

As a Computing teacher, I often feel that people expect me to behave in a certain way; they expect me to have the latest gadgets and to get excited by the potential of new developments such as the "internet of things", but is there a danger in mindlessly promoting the latest technology? Is it always an improvement over what came before? 

When describing the benefit of a particular piece of technology, we always tell students to avoid vague statements about cost and ease-of-use, but for home and leisure activities we're usually sold things in terms of convenience. 

Convenience for whom, though? There used to be a common expression, “all mod cons”, which was an abbreviation of “all modern conveniences”, meaning things fitted in modern houses to make life easier or more comfortable. 

Are devices such as iPods and Kindles of more benefit to us, or to Apple and Amazon? I note that, despite there being no manufacturing, storage or delivery costs, it's still often cheaper to buy the CD of an album than to download it in a compressed format at lower quality – and downloading an uncompressed FLAC copy can cost nearly twice as much. Apple dropped the headphone socket from iPhones to make them a fraction thinner, but I’d find it more convenient to be able to use my headphones. 

Digital broadcasting might give us more choice on our radios and televisions, but it’s of more benefit to the broadcasters as signal strengths are lower and several programmes can be multiplexed together on the same channel, saving power and bandwidth. It also means greater compression and lower quality as broadcasters try to squeeze in more content. 

It also turns out that we quite like a bit of inconvenience – vinyl sales are at a twenty-year high, mechanical watches are still fashionable, people like writing with fountain pens, and who doesn't love a steam train? Inconvenience can be quite hard to come by, though – BMW engineers had to hack the cars' safety systems on the set of the last James Bond film to allow them to drift and perform other stunts that traction control, etc., would normally prevent. 

There are plenty of examples of things that are not really an improvement over what came before. Contentious sound quality aside, DAB lacks the traffic announcements and programme-type search of FM with RDS; robots can't flip burgers; and some schools use a CMS, such as Joomla or WordPress, to teach students to create web-pages without them learning anything about HTML. 

There are also applications of technology that create in us a form of learned helplessness. People navigate by GPS and no longer look where they're going, computers calculate scores when playing darts or bowling, removing the need for mental arithmetic, and we now even need traffic lights in the floor, apparently, so we don't need to look up from our smartphones to avoid getting run over. We can’t even be trusted to turn off the engine when we stop our cars. 

Then there's planned obsolescence and "sunsetting". Digital televisions and set-top boxes have stopped working, Nest has disabled products costing hundreds of pounds, Skype stopped working on televisions, and Spotify dropped support for my network music player with a few days’ notice, despite it having a cult following. Support for mobile devices also tends to be time-limited – I’m typing this on a six-year-old PC that I use for work on a daily basis, but my five-year-old Android tablet is pretty much unusable; not because there’s anything wrong with the hardware, but because the manufacturer no longer offers operating system updates. 

I've had my car for ten years, and the time has come to think about a replacement. It’s a major purchase and I want something that might last me another ten years, but I’m concerned about longevity. Lots of modern cars have systems that run Android Auto or Apple CarPlay, and have computer-controlled “entertainment systems” that mean it takes 30 seconds to turn on the radio. Even someone who works in the car industry couldn’t reassure me that all of this technology would still be working in five years. 

Now before you start thinking that I’ve turned into a grumpy old man… I know that there are many positive aspects to the use of technology, but we hear about those too often. Even the news channels tell us about the latest Apple products. 

When I first started teaching, back in the 90s, we never taught any presentation skills, but did more word processing. I used to point out that the use of ICT needed to be appropriate – we wouldn’t turn on the computer, load Windows, start Word, turn on the printer, load paper, type some text and print it if we just wanted to leave a note for the milkman or to let the family know that we’d popped out to the shops. 

In PSHE lessons we teach students not to eat too much sugar or fat, but I often think that we should also teach students to be more discerning in their consumption of information - schools should give students a copy of Ben Goldacre’s Bad Science, rather than a Gideon New Testament. I'd like to think that we can teach students to resist being wowed by new technology and ask themselves if the latest gadget is really of benefit to them, or whether it’s merely a solution in search of a problem, just a gimmick or a cynical marketing tool.

Tuesday, 19 June 2018

The Computing Scapegoat

Ever since Computing replaced ICT in the National Curriculum there have been discussions about whether students will have the skills required by employers, or how they might acquire those skills. A recent study has been reported as suggesting that there is a “digital skills gap”, with fewer students getting “vital” skills.

That article appears to be a little confused; it starts off by saying that students lack the "digital" skills that employers want, but then most of the text is about the lack of diversity in students taking Computer Science as a GCSE.

For me, it also raises a number of further questions:

1. What are these "digital" skills that employers think are lacking in our students? There's no mention of anything specific.

2. ICT was compulsory in many schools, which may well explain why the uptake of Computer Science looks poor in comparison; do we know what percentage of students took GCSE ICT in schools where it was optional, for example? And do we know what proportion of students were girls in schools where ICT was optional? Some schools also entered students for two ICT qualifications at the same time; do the figures take that into account?

3. More fundamentally, are schools really there to provide vocational training? Many people struggle to find a cleaner; should cleaning be added to the curriculum?

Computing and Computer Science seem to be getting the blame, but are they really the cause of the decline in “digital” skills, or are they being made a scapegoat by non-specialist teachers who don’t want to teach those subjects? Haven’t employers always complained about lack of skills?

Almost half of young people go on to higher education, so the first point we could make in Computing’s defence is that, if half of the newly-employed young people who lack the skills are graduates, they will have been 16 at least five years ago – i.e. when ICT was still a National Curriculum subject.

IT is always in a state of change, and I think that there’s a danger of conflating a number of issues.

When I first started teaching, back in 1997, not all students had a computer at home, and using a computer at school was a novelty. Students had lots to learn, and they were enthusiastic about ICT lessons. I should also point out that the ICT National Curriculum always contained a programming element (called Developing ideas and making things happen), and my first ever lesson was teaching functions in BASIC to a year 9 class.

In the early days of home computers, Bill Gates said that it was his dream for the PC to become just another household appliance, like a television or fridge. Fast-forward a few years and that dream became reality. By the early noughties it was unusual for a household not to have a computer. Paradoxically, though, I started to notice a decline in students’ IT skills. My assumption was that if you’re already familiar with something at home then you don’t want to be told how to use it when you get to school; you don’t want to be taught how to use your computer any more than you want to be taught how to use your fridge. So I would estimate that “digital skills” had already been in decline for a decade before Computing was added to the National Curriculum.

It was at about this time that the KS3 Strategy was introduced in response to concerns over the lack of skilled teachers to deliver the ICT curriculum. It wasn’t compulsory, but gave the impression that where we were previously teaching relational databases, we could now get students to review web-pages and talk about the difference between a fact and an opinion instead. It’s at this point that a lot of teachers who are resistant to teaching Computing appear to have joined us. It’s also the point at which portfolio-based qualifications, such as DiDA and OCR Nationals, were introduced, removing the need for students to master any ICT skills.

I often get to see students’ own computers, and in the last ten years I’ve started to notice that those computers contain fewer and fewer files that were actually created by the students themselves. It wasn’t that uncommon for a student’s My Documents folder to be completely empty because the laptop was only used for social media, YouTube and games.

It made sense, therefore, for these students to switch to other devices, and we’ve seen a proliferation in the number of alternative platforms – phones, tablets, Chromebooks, etc. Once again it’s not unusual for households not to have a PC, and even for students to only use iPads in primary school. In the Autumn term a year 7 student told me that he’d never used a computer before.

When I asked in a forum what “digital skills” an employer might think were lacking in our students, one suggestion was keyboard/mouse skills and filing. iPads don’t have keyboards or mice, and have no accessible filing system. Typing is a skill, and speed comes with practice. If a school has one KS3 lesson per week and we remove exposition time and any activities that don’t require a lot of typing (spreadsheets, editing images, etc.), then students might type for an average of maybe 15 minutes per week – that’s not enough to master a skill without practice at home. 

But what skills are employers really looking for? I’d imagine that after a higher education course most candidates would be able to type at a reasonable speed. They might want “transferable skills”, such as copying and pasting, but I’d be surprised if most students couldn’t do that by the time they left school. 

Most jobs require a limited range of skills, and possibly the use of bespoke systems. This may only require a small amount of training; if you employ someone that doesn't have the spreadsheet skills you require, but is otherwise suitable for the job, you can probably show them what to do quite quickly.

The same cannot be said of algorithmic thinking skills. Critics of Computing say that programming is just a niche subject for people who want to work in the software industry, but they’re missing the point. A program is just the measurable outcome of a thought process, and it’s the thinking that we want to encourage.

Programming is about problem solving and algorithmic thinking, and more people need algorithms than need spreadsheets; more people sort things or plan routes than create "multimedia products" in PowerPoint. Computer Science is relevant to people who don't even use computers, such as the Travelling Salesman and the Chinese Postman. In their book, Algorithms To Live By, Brian Christian and Tom Griffiths even go one step further and say that computer science principles are not only useful for sorting our CDs, but that thinking like a computer scientist can help with the big decisions in our lives.
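To make that concrete: the way most of us already find an album in an alphabetised rack is essentially binary search. A minimal sketch (the names `rack` and `find_cd` are mine, purely illustrative):

```python
def find_cd(rack, title):
    """Binary search – how you'd look for a CD in an alphabetised rack.

    Returns the position of `title` in the sorted list `rack`, or None.
    """
    low, high = 0, len(rack) - 1
    while low <= high:
        middle = (low + high) // 2      # open the rack roughly in the middle
        if rack[middle] == title:
            return middle
        elif rack[middle] < title:      # too early in the alphabet: look right
            low = middle + 1
        else:                           # too late in the alphabet: look left
            high = middle - 1
    return None

rack = ["Blur", "Gary Numan", "Pulp", "The Selecter", "Tubeway Army"]
find_cd(rack, "Pulp")   # → 2
```

No computer is required to apply the idea – halving the search space each time is exactly what you do with a telephone directory.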

I think that the Computing curriculum is a massive improvement over the ICT one that preceded it, but I know that not everyone agrees. If you get frustrated that students can’t type or name their files properly, though, make sure that you’re not getting swayed by your own confirmation bias and desire not to learn what object-oriented programming is, and ask whether there are factors at play other than the curriculum.

Wednesday, 19 July 2017

What is Progress?

If you frequent teachers' forums, such as those in the TES Community, you'll regularly see colleagues asking for advice on how to record and monitor "progress".  But what does that mean?  How can we measure "attainment" or "progress", and what would the measurement look like?  Are there SI units for progress?

One view is that, when creating a course or scheme of work for a particular subject, you should first ask yourself what it means to be good at that subject.  What does it mean to be good at Computing, for example?  This will inform your curriculum and tell you what you need to teach, but it will also give you an idea of your success criteria – if someone needs to do A, B and C before they're good, then surely a student that can do A, B and C will be good.  Or will they?  Is it that simple?

How do you know when you're good at something?  Is it when you can do it once?  When you can do it consistently?  Or when you feel confident in doing it?  If you learn A one year, and then learn B the next, have you made progress?  Even if B is actually much easier than A?

One of the problems with this approach for our subject is that there's disagreement about what Computing is.  We've got different ideas about what it means to be good at Computing – I've said before that I will feel that I've done my job as a Computing teacher if a student leaves my classroom, looks at something and says "I wonder how that works?"  However, I've never seen an exam or baseline test that measures that particular skill.  In fact, a lot of the "baseline" tests that I see measure things that I don’t consider to be Computing at all. 

We all know that OfSTED wants to see "progress", but what is it?  Is it getting better at something, or just getting further through it?

With the old National Curriculum it was easy; you matched the student's work against the descriptions in the subject-specific document and gave it a number.  Or was it that easy?  I never really heard a complete and satisfactory description of how to give a student an overall level that didn't include the phrase "professional judgement" and/or "best fit".  Measuring the length of my desk doesn't require "professional judgement" – it just requires a tape measure.

You could only really give a meaningful level in a single area of the curriculum – if a student programmed efficiently then they were at level 6, if they didn't, they weren't.  Generating an overall level, which is what some schools and parents required, was more tricky.  What if something hadn't been taught at all?  What if a student was a solid level 6 for Finding things out, Exchanging and sharing information and Reviewing, modifying and evaluating work as it progresses, but had never done any Developing ideas and making things happen?  I was once told by a senior HMI inspector that under those circumstances the student would be level 6 overall – but if the same student had done a small amount of Developing ideas and making things happen and was maybe working at level 3 in that area, then their overall level would be reduced.  Knowing more reduces their level?  Surely that can't be right?

At least the old National Curriculum levels indicated a proper hierarchy of skills – students working at level 6, for example, were working at a higher level than students working at level 4.  Or, put more simply, the level 6 things were "harder".  A level 4 student could "plan and test a sequence of instructions to control events in a predetermined manner", whereas a level 6 student could also "show efficiency in framing these instructions, using sub-routines where appropriate."

The "post-level" world seems to be dominated by teachers (or, more probably, school leaders) that still want to give everything a level, and schools and other organisations are creating their own systems of assessment, such as the CAS Computing Progression Pathways.

What I notice about many of the new "levels" is that they're not hierarchical.  CAS give theirs colours, rather than numbers, perhaps to indicate this, but they still use arrows to indicate order and "progress", even though some of the later skills seem to be more straightforward than the earlier ones.  For example, "Understands that iteration is the repetition of a process such as a loop" is two levels higher than "Designs solutions (algorithms) that use repetition and two-way selection i.e. if, then and else", which seems a bit strange when Bloom tells us that understanding comes before application.  Also, if there are multiple strands in your subject, how do you ensure that the same levels in different areas are equivalent?  Is understanding recursion really at the same level as "Knows the names of hardware"?
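For comparison, the "repetition and two-way selection" descriptor quoted above asks for nothing more than something like this (my own made-up sketch, not an official CAS example):

```python
def classify(marks, pass_mark=50):
    """Label each mark as 'pass' or 'fail'.

    Uses repetition (a loop) and two-way selection (if/else) –
    the two constructs the CAS descriptor names.  The pass mark
    of 50 is an illustrative threshold.
    """
    results = []
    for mark in marks:            # repetition
        if mark >= pass_mark:     # two-way selection
            results.append("pass")
        else:
            results.append("fail")
    return results

classify([72, 48, 55, 39, 81])   # → ['pass', 'fail', 'pass', 'fail', 'pass']
```

It's hard to argue that writing this demands a higher level of thinking than merely knowing what the word "iteration" means.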

Some schools have started to use numbers that relate to the new GCSE grade descriptors.  In my experience, SLT members tend to be teachers of English or humanities.  If you look at the GCSE grade descriptors for English, for example, you can see how that might make sense – they describe a finely-graded set of skills that a student might possess, and you might be able to see or predict whether your student will develop those skills over the coming years.  English, though, has a limited range of skills – reading, writing, spelling, etc. - that you can apply to varying degrees.

Compare that with the grade descriptors for Computer Science – a grade 2 candidate will have "limited knowledge" and a grade 8 student will have "comprehensive knowledge".  They're basically saying that the more you know, the higher your grade will be.  I recently counted 120 skills that needed to be taught in a GCSE Computer Science specification.  How many would constitute "limited knowledge" and how many "comprehensive knowledge"?

When a student starts a course they will, I would have thought, have "limited knowledge".  If you teach ten of the 120 skills and a student remembers and understands them all, what does that mean?  Can you extrapolate and say that they'll know all 120 by the time of the exam and have comprehensive knowledge?  But didn't you start with the easier topics?  Can you be sure that they'll understand all of the rest?  How about a student who understands half of the first ten – can you assume that they'll understand half of the remaining 110?  Or that they'll understand fewer because they'll get more difficult?

For this reason, I've never understood why teachers (particularly my Maths colleagues) say things like "This is an A*/level 8 topic…" Yes, that may well be a topic that students find more tricky than most of the others, but how does that equate to a grade?  The only thing that we can be sure about is that the more questions a student answers, the higher their grade will be – if they answer half of the questions, they'll get half marks, regardless of whether it's the easier half or the harder half.  If they answer only the A* questions, then they'll most likely get a U.

Another issue to consider is the nature of the subject.  With some subjects – English and Maths, for example – there is a high degree of continuity between KS3 and KS4.  With some, e.g. Science, the underlying principles are the same, but the content is different, so a student will justifiably have "limited knowledge" of GCSE content for the whole of KS3.  Some subjects, e.g. Business Studies, don't exist at KS3, and some, e.g. PE, are completely different at GCSE level; no-one does a written PE exam in KS3.

If none of the previous or current methods is really ideal, how are we to measure progress?  Here's one last conundrum.  Is what we see actually the thing that we're trying to measure?

This is a particularly interesting question in Computing.  One of the things we're trying to teach, for example, is computational thinking.  What does that look like?  What the students might produce to evidence their computational thinking is a computer program – the output isn't quite what we're trying to measure.  One of the other things I've consistently observed is that confident and able programmers tend to make their applications user-friendly, rather than technically complex; again, that's not something that we always see in schemes of assessment or lists of skills to be acquired.

I had an interesting discussion with my daughter's Maths teacher at a year 5 parents' evening.  My daughter had recently moved from a school where they'd learnt formal methods for multiplication and division nearly two years earlier than students at the new school.  "Yes," said the new teacher, "but we teach understanding first…"  Really?  Can you teach understanding?  

Bloom's Taxonomy tells us that remembering comes before understanding.  Anyone who's had (or seen) a baby will know that we learn by observing and copying, and come to understand later.  If at all.  How many of us can happily use the formula πr² to calculate the area of a circle without understanding why it works?  There isn't even agreement about what it means to understand something, let alone how to assess understanding.

The new National Curriculum leaves it up to schools to decide how to assess progress.  When making that decision, here are questions that I would ask about the system devised:

  • Is it suitable for all of the subjects taught in your school, both content-based and skills-based?
  • Is it really possible to give an overall summary of the whole subject?
  • If the subject is broken into "strands", what should they be?  I break my Computing course down into sections such as Representation of data, Maths for Computing, Algorithms and programming, Networking and Information Systems, for example.  These do overlap, though – e.g. where do you put routing algorithms and compression techniques?
  • Does giving everything a number make sense?  How do you equate skills from different areas of the curriculum, for example?  Is understanding recursion more or less difficult than adding binary numbers?  Does a higher number indicate something that is more difficult, or just that it's further through the course?
  • Are you measuring what you actually want the students to learn?
  • Will students and parents easily understand their results or scores?
  • Should students be involved in the process?  There is a controversial idea that students are able to assess themselves.

It seems implicit that the will of the DfE was to do away with the use of numbers to describe a student's attainment.  The system that my colleagues and I devised is both simple and easy to understand.  It resembles the APP method, except in one crucial respect – we don’t convert the results into a number at the end.

For each subject we have a bank of skills that the student might demonstrate.  For Computing these were based on the ones in the CAS Computing Progression Pathways document (with the apparent duplication removed and some extra bits added).  For each "skill", we record whether the student can sometimes do something, or whether they can confidently do it.  We can then report that a student can do X this year when last year they couldn't, or that he/she is now doing Y confidently when last year they only did it sometimes.  There's no aggregation – it's easy to record and easy to read.  That system might not suit you and your school, but it shows that recording doesn't need to be onerous and you don't need to label every student's attainment with a single number.

Tuesday, 21 March 2017

Why Computing Unplugged?

I used to liken the difference between computer science and ICT to the difference between architects and bricklayers; the existence of the latter is a consequence of the former, and it's also the role of one to implement the more conceptual ideas of the other. I was never entirely happy with that analogy, though, because there are also pay and status connotations that I hadn't intended.

Since Computing replaced ICT in the 2014 National Curriculum I've changed the way I think about the difference. One of the myths that has developed is that computer science is a niche subject only suited to those hoping to work in the software industry, while ICT is better-suited to the wider population. Nothing could be further from the truth, of course – the opposite is actually the case. The type of ICT taught in schools is really a vocational subject for students destined for desk-based, administration-type jobs, whereas computer science is for everyone.

People search for, and sort, things, plan routes, apply logic, etc., in all kinds of contexts that don't require a computer. Algorithms are not just for programmers, they're also for Chinese postmen and travelling salesmen (to name two of the most famous problems in Computer science).

Nothing illustrates this point better than the existence of "Computing Unplugged". While it's perfectly possible to demonstrate key Computer science techniques without using a computer, the same is not true of ICT. You never hear about "Spreadsheets Unplugged", or "PowerPoint Unplugged", for example.

Undertaking activities away from the computer not only shows that computer science isn't just about computers, and can be relevant to manual or off-line tasks, but it also helps to break the link between Computing and "playing" on the computers – it makes Computing look like the other subjects and enables us to practise our traditional "teaching" skills.

Sorting algorithms are a computer science staple, but physical items also need sorting. One of my favourite unplugged activities, therefore, is sorting. Anyone with a collection of records or books might have wanted to put them in order, but what's the quickest way? What about bigger sorting tasks?

When I first started teaching, we still hand-wrote reports with a pen. We completed an A4 sheet per subject per student, and put them in a tray in the staffroom for tutors to collate. Not only were tutors supposed to arrange the reports by student, but we were also supposed to sort the subjects into alphabetical order of name. A colleague was complaining one day that it had taken her ages to sort them, with the sheets spread out around her dining room, but I'd done it all in a ten-minute tutor period with the help of the students. I'd used an efficient parallel sorting algorithm and she hadn't. No computers were involved.
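The tutor-period approach can be sketched as a merge sort: deal the sheets into piles, have each student sort one small pile, then merge the sorted piles pairwise. A sketch under those assumptions (`sorted()` stands in for a student sorting their own pile; the function names are mine):

```python
def merge(left, right):
    """Merge two already-sorted piles into one sorted pile."""
    merged = []
    while left and right:
        # take whichever pile's front sheet comes first alphabetically
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

def classroom_sort(sheets, helpers=4):
    """Deal the sheets out to `helpers` students, let each sort a small
    pile (sorted() stands in for a student here), then merge the sorted
    piles pairwise – the same divide-and-conquer idea as merge sort."""
    piles = [sorted(sheets[i::helpers]) for i in range(helpers)]
    while len(piles) > 1:
        pairs = [piles[i:i + 2] for i in range(0, len(piles), 2)]
        piles = [merge(*pair) if len(pair) == 2 else pair[0] for pair in pairs]
    return piles[0]

classroom_sort(["Science", "Art", "Maths", "English"])
# → ['Art', 'English', 'Maths', 'Science']
```

The crucial point is that the students sort their piles simultaneously, which is why the tutor-period version beat the dining-room one.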

In their book, Algorithms To Live By, Brian Christian and Tom Griffiths go one step further and say that Computer science principles are not only useful for sorting our CDs, but that thinking like a computer scientist can help with the big decisions. Computer science is all about coping with limitation, just like life. PowerPoint can't help you with that.

In Finland, students routinely learn about computer science without computers. Computing principles can be applied in a variety of contexts – they learn about algorithms, for example, through knitting, dancing and music (maybe you could try an unplugged version of the Twelve Days of Christmas activity?). Even software companies are applying their expertise in non-computing contexts – Microsoft is using its computer science expertise to try to cure cancer, for example.

I was recently invited by course leader Jonty Leese to a Subject Hub of Excellence (ShoE) day at the University of Warwick's Centre for Professional Education. ShoE days invite experts in their subjects to share knowledge and expertise with trainee teachers. This is a good opportunity for students to look beyond the narrow requirements of the National Curriculum for Computing and expand their repertoire of teaching techniques. It also helps them to develop an understanding of what Computing is, which is something that experienced teachers converting from other subjects and going straight into teaching examination courses don't always get the time to do.

The majority of the ShoE day I attended was about computing without computers. Amongst the things we looked at were creating logic circuits from dominoes, sorting networks, and using a card trick to explain the concept of parity. Some of these examples will help students to visualise things like logic gates, or to understand concepts such as parity before applying them in a more technical context.

I particularly liked the card flip magic task – students aren't always aware of things such as loose connections, electrical interference and the importance of error-free transmission, but they can appreciate a good trick and how it works.
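The idea behind the card flip trick is a parity bit, and it's simple enough to sketch in a few lines of Python (the message and the flipped bit below are made up for illustration):

```python
# A minimal sketch of even parity: the sender adds one extra bit so that
# the number of 1s is even, and the receiver can then detect any single
# flipped bit in the transmission.

def add_parity(bits):
    """Append a bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(bits):
    """Return True if no (single-bit) error is detected."""
    return sum(bits) % 2 == 0

message = [1, 0, 1, 1]
sent = add_parity(message)          # [1, 0, 1, 1, 1]
print(check_parity(sent))           # True - transmission looks fine

corrupted = sent.copy()
corrupted[2] ^= 1                   # simulate a flipped bit
print(check_parity(corrupted))      # False - error detected
```

In the card trick, the extra row and column of cards play exactly the role of the parity bit here.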

The essence of Computing is really "how things work", so I don't see why we can't also take lesson time to explain anything from any subject area – when my daughter was learning about the Romans in year 4, for example, we discussed an algorithm to turn numbers into Roman numerals. Similarly, when she had a year 6 Maths homework to find "perfect numbers", we also took the opportunity to think about and write an algorithm for working out whether a number was perfect.
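Both of those discussions can be turned into short programs. These aren't the exact algorithms we wrote at the time, but a sketch of the same ideas: a greedy conversion to Roman numerals, and a test for perfect numbers (numbers equal to the sum of their proper divisors):

```python
# Greedy conversion to Roman numerals: repeatedly take the largest
# symbol that fits into what's left of the number.

def to_roman(n):
    values = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
              (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
              (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    result = ""
    for value, numeral in values:
        while n >= value:
            result += numeral
            n -= value
    return result

# A number is perfect if it equals the sum of its proper divisors,
# e.g. 6 = 1 + 2 + 3.

def is_perfect(n):
    return n == sum(d for d in range(1, n) if n % d == 0)

print(to_roman(2018))                                # MMXVIII
print([n for n in range(2, 500) if is_perfect(n)])   # [6, 28, 496]
```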

Some topics lend themselves nicely to analogies and off-line examples, but you also need to take care not to make the examples too obscure. I made my Words of Wisdom page, for example, in an attempt to demonstrate to students the difference between validation and verification, but it turned out to be a bit too abstract for most of them.

Another consideration is whether the benefit gained from an unplugged activity is worth the time and effort it requires. Take this example of a four-bit adder made from cardboard. If computing principles can be (or even are mostly) applied in non-computing contexts, then Computing Unplugged can be an excellent way to get students to learn and apply them. Technical processes, such as addition or the fetch-execute cycle, are only done inside the computer and, in my opinion, are probably best kept there – just tell the students how they work and save the lesson time for tasks that enhance computational thinking.

One of the downsides of Computing Unplugged is that it does often require unusual and infrequently-used resources to implement – large sets of dominoes or sets of 36 double-sided cards, for example.

What I do is take some of the unplugged activities and plug them back in to create web-based resources, such as card flip magic and the sorting balance. This might seem to be a bit contradictory, but the students still see the concepts in a (simulated) non-computing context and the whole class is able to undertake the activity at the same time.

Teaching computing without computers also helps Computing teachers to practise more traditional teaching skills (as used in other subjects), and to develop confidence in discussing concepts and ideas, such as the deadly ancient maths problem that computer scientists love, away from the computer without a practical demonstration. Having a Computing lesson without computers isn't something to be afraid of - it's an opportunity for both you and your students.

If you're not sure where to start, the CS Unplugged web-site contains ideas and activities. If you're not quite ready to take the plunge with a whole lesson, why not discuss a traditional problem, such as the river-crossing puzzle or the Towers of Hanoi, or give a logic puzzle from a "brain teaser" book as a starter and see how it goes?

Thursday, 1 December 2016

Are You a Computer Scientist?

Back in 2000, Channel 4 started to broadcast a series called Faking It, in which people would receive brief-but-intensive training to try to pass themselves off as an expert in an unfamiliar field.  The introduction of the new National Curriculum in 2014 led to some ICT teachers, and in particular those teachers who had moved into ICT from other subjects, feeling like they were starring in an episode themselves.

There was some initial resistance, but two years on I think that most teachers have moved around the cycle of acceptance and have started to accept Computing as a subject.  They've read up on various Computer Science techniques, learnt to program, and are now asking in forums not what to teach, but the best way to teach certain topics.  

One of the things that bothered me most when I left my previous career in the software industry to become a teacher was that I could no longer really tell whether I was doing a good job.  If you're a programmer, you can tell when your program doesn't work.  You can tell how well your program works, for example, by using a stopwatch or looking at the size of the resulting file.

Teaching appears to be more subjective – what works seems to be open to debate.  In the space of little more than a week, for example, the TES told us both that a curriculum which is over-reliant on pen and paper, timed exams and memorisation will not suffice for the 21st century and that more emphasis should be placed on memorisation to lay the foundations for more complex tasks.

You might be confidently delivering a course, and your classes might be getting good results (which is obviously a good thing for your students), but not everything is in the specification.  As Donald Rumsfeld famously said, "You don't know what you don't know", and there can still be some obvious signs that you're "faking it" even if you can teach all of the topics.  The new GCSE specifications are more explicit and help with the subject content, but what's missing is a sense of the subject's underlying philosophy. 

I frequent a number of teaching forums, and when I joined a new one last year, the first discussion that caught my eye was about a particular coursework task for GCSE Computer Science.  Several posters had proposed a similar solution, but I could see that there was a much more efficient way to approach the task, and I pointed this out.  The other contributors immediately responded that efficiency wasn't one of the requirements of the task.

That was true – the task didn't explicitly mention efficiency.  It didn't need to, though - efficiency is the raison d'être of the whole subject.

This was nicely demonstrated in last year's BBC4 programme, The Wonder of Algorithms.  The NHS and the University of Glasgow's department of Computer Science had worked together to produce a computer program to match people in need of an organ transplant with suitable donors.  The program worked well and the doctors and patients were delighted that everyone had been matched with a new kidney.  The computer scientists were disappointed because it had taken half-an-hour to run.

Computer Scientists, you see, consider efficiency at every available opportunity, not just when questions and tasks ask them to.  The biggest difference between ICT and Computing is that ICT was more concerned with how things looked, while Computing is concerned with how things work.  Refinement in ICT was about how to make your output's appearance better suit the audience, whereas refinement in Computing would mean getting your program to use fewer resources, with resources being things such as processor time, memory, disc space or bandwidth.

One way that you could remind yourself to consider efficiency is to use a really slow computer.  Dijkstra famously said that the advent of cheap and powerful devices would set programming back 20 years.  He was right – computers today are so fast that for most tasks we don't need to think about efficiency, and have so much memory that we don't need to think about saving the odd byte here or there.

Unnecessary repetition is usually the biggest waste of processor time, but complex calculations can also use a lot of processor time, particularly on a slower computer.  When I was a teenager in the 80s, for example, even drawing a circle was something that needed to be done carefully; trigonometric functions (e.g. sines and cosines) take longer to calculate than squares and roots, so it can be quicker to use Pythagoras' theorem.
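To make the comparison concrete, here's a sketch of the two approaches to generating the points of a circle centred on the origin (modern Python will run both instantly, of course - the difference only mattered on an 80s micro):

```python
import math

# Trigonometric approach: one sin and one cos per point.
def circle_trig(r, steps=360):
    return [(r * math.cos(math.radians(a)), r * math.sin(math.radians(a)))
            for a in range(steps)]

# Pythagoras approach: for each x-coordinate, y = sqrt(r^2 - x^2)
# gives the top half of the circle, and -y gives the mirror image below.
# Just one square root per column.
def circle_pythagoras(r):
    points = []
    for x in range(-r, r + 1):
        y = math.sqrt(r * r - x * x)
        points.append((x, y))
        points.append((x, -y))
    return points
```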

I recently came across a discussion of this task in a forum for Computing teachers:

A student has a Saturday job selling cups of tea and coffee. The tea is £1.20 per cup and the coffee is £1.90. The student should keep a record of the number of cups of each sold. Unfortunately it has been so busy that they have lost count but they know that they have not sold more than 100 of each and the takings are £285. Create an algorithm that will calculate the number of cups of tea and coffee sold.

By the time I saw the question, there were already a number of responses, all suggesting the use of nested loops – one each for tea and coffee, both counting from 0 to 100 and multiplying by the cost of the drinks to see whether the total was £285.

I was a bit surprised that everyone had suggested the same solution as it's wildly inefficient – the program would loop 10,000 times to find the answer, so I proposed a solution that found the answer in about 14 iterations.  As one amount decreases, the other would increase, so the quickest way to find the solution would be to start with 100 coffees and count down until you'd need more than 100 teas to reach £285; you could then work out the cost of the coffees and see whether the difference between that and £285 was a multiple of £1.20, the price of a cup of tea (using modular arithmetic).   I tried both solutions in Python on a new-ish laptop, and both took a negligible amount of time.

Having learnt to program in the 80s, though, I converted both programs into BBC BASIC and ran them in real-time on a BBC Model B emulator – a really slow computer by modern standards.  The difference was clear – the single loop took 0.13s, the nested loops solution took well over a minute.

To be fair to the other forum contributors, though, it later turned out that the problem in question did actually come from a worksheet on nested loops.  That doesn't mean that it's an appropriate use of nested loops, though – it's quite common for opportunists to try to make money from new developments in education.  Those of you who remember the introduction of video projectors will also remember that schools were deluged with adverts for "interactive whiteboard resources" (i.e. poor-quality PowerPoint presentations) shortly afterwards.

When the Computing curriculum first appeared, I seriously considered using the BBC Model B emulator to teach programming to my KS3 students, precisely because it's so slow.  It was only the complicated procedures for editing and saving programs that led me to look elsewhere.

When you write a program, you can measure how quickly it runs with a stopwatch, and generally the less time it takes, the better.  Recently, though, Linus Torvalds has been talking about a slightly more abstract concept – "good taste" code.  To summarise, it seems that applying the good taste principles really just involves thinking about your algorithm carefully to create a general function that works under all circumstances without the need for ifs to handle special cases.  While this might be a bit too abstract for KS3 classes, it's probably worth a mention to GCSE and A level classes.
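Torvalds' original example was removing an item from a linked list in C, using a pointer-to-a-pointer to avoid treating the head of the list as a special case. That trick doesn't translate directly to Python, but a rough analogue of the same idea (my adaptation, not his code) is to use a dummy head node so that every node, including the first, is handled by the same lines:

```python
# "Good taste" in the linked-list sense: a dummy head node means there
# is no special-case if for removing the first item in the list.

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def remove(head, target):
    dummy = Node(None, head)             # makes the head an ordinary node
    prev = dummy
    while prev.next is not None:
        if prev.next.value == target:
            prev.next = prev.next.next   # unlink - same code for any position
            break
        prev = prev.next
    return dummy.next

# Build the list 1 -> 2 -> 3 and remove the head
head = Node(1, Node(2, Node(3)))
head = remove(head, 1)
print(head.value)   # 2
```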

Finally, the other thing that fascinated me when I first became a teacher is that teachers are often asked to do things for which there is no evidence - from accommodating different learning styles to "deep" marking.  

As a Computer Scientist, I not only examine my programs and web-pages for efficiency, but I also want to teach in the most effective way possible.  I would find myself asking things like "Where's your evidence that a three-part lesson is better?", "Are starters really effective?", or "Is open and closed mindset training the result of a proper study or is it the new Brain Gym?"  A surprising number of colleagues didn't ask those questions.

I was recently taken aback to see someone asking, in a Computing forum, whether other teachers had considered "making CPUs with boxes and string" when teaching the fetch-execute cycle, and not only that, but a number of people had replied to say that they liked the idea.  Now, there aren't necessarily right and wrong ways to teach things, as mentioned above, but no-one else seemed to question why you would do this, or whether it was a good idea.  Knowing that we remember what we think about, and that a model of a CPU made with boxes and string would neither look, nor function, like the real thing, I could think of a reason why making such a model might not be effective; no-one could suggest why it might be.

I've hinted in previous articles that I'm a fan of evidence-based practice, and in particular the work of John Hattie and Daniel Willingham.  I thoroughly recommend Why Don't Students Like School? as a guide to using cognitive science to improve your lessons.  I've written previously that I don't like projects, and that challenge and repetition are more effective than "fun".  These ideas have met with some resistance from colleagues, but I didn't make them up – they were based on research that I'd read (and to which I'd linked in the text).  Next time you either like (or don't like) an idea, why not release your inner scientist and see if there's any evidence to back it up (or refute it)?

PS.  After I wrote this, the following article, which touches on similar themes, appeared in the TES - ‘It's time to speak out against the ridiculous amount of poppycock we are spouted at education conferences'

Thursday, 10 November 2016

Challenge and Repetition Are Better Than Fun

It's always nice to hear that you're accidentally doing the right thing. As a coffee-drinking vegetarian who likes spicy food, it seems that I've inadvertently become almost immortal, and a similar thing has recently happened with my teaching methods. I've always taken a traditional approach to my teaching, and recently a number of articles have appeared that support my views.

There are regular posts in teaching forums asking how to make lessons in Computing more "fun". I often answer that the best thing we can do for our children is teach them to cope with boredom, and I'm only partly joking. A more serious answer is that aspects of our subject are not "fun", but technically and ethically complex.

Are "fun" lessons better for students, or do lessons actually need to be technically and ethically complex? Are we doing students a disservice by prioritising "fun" over challenge?  Is the real reason that children are so easily bored and frustrated that we over-stimulate them and they have an over-reliance on technologyIs boredom actually something to be savoured?

I previously mentioned Willingham's biscuits – a US teacher tried to spice up lessons on the Underground Railroad by getting students to bake "biscuits" to remind them of the diet of escaped slaves, but all they remembered was that they'd followed a recipe and done some baking. You remember what you think about.
The TES reported recently that challenging lessons aid long-term retention, but this idea isn't new – the term "desirable difficulties" was first coined by Robert Bjork in 1994 and other studies have shown that effortful processing improves retention.

The idea that making things more difficult can improve learning might seem counter-intuitive. One of the most surprising findings, perhaps, is that even your choice of font can have an impact. Teachers are often told that we should use the Comic Sans typeface, for example, because it's easier to read and closely resembles handwriting. It turns out, though, that difficult-to-read fonts improve retention (strangely, that study uses Comic Sans as an example of a font that's harder to read!). One theory is that your brain is subconsciously saying, "this is difficult, so it must be important"!

When my daughter was a baby, a number of videos (most famously the Baby Einstein series) were available that were supposed to help children learn to speak more quickly. It turned out that these were no more effective than leaving your child in front of an episode of the Simpsons, and could actually delay speech.

What does encourage children to speak more quickly, though, is exposing them to speech radio. Not being able to see the speaker's mouth increases the cognitive load for the listener; making it more difficult to listen to speech helps infants to learn.

After learning to speak, we go to primary school and learn to read. The learning of facts has become unfashionable in recent years, with some primary schools preferring a play-based curriculum. Cognitive Psychologist Daniel Willingham (of the University of Virginia) tells us that facts are more important for comprehension than reading skills - students who know more will out-perform their peers (and those peers will never catch up).

Helen Abadzi of the University of Texas also tells us that rote learning is essential for a child's education – but play isn't.

Repetition has traditionally been viewed as essential to learning, but the recent push for "progress" has made it unfashionable to say "practice makes perfect" (although "mastery" was a focus in the 2010 National Curriculum that was scrapped by the coalition government).

A study of people developing their programming skills in the software industry found that practice does indeed make perfect – programmers who completed twenty programming challenges performed twice as well in tests. The implication for our lessons is that students need more programming tasks, not longer projects.

But how many repetitions are required for retention in long-term memory? I read earlier in the year that you need to hear things four times before you remember them, other articles suggest seven – although I'm sure that it depends on the complexity of what you're trying to remember.

What can be more important than the number of repetitions is their timing. There is evidence that spaced repetition is more effective – i.e. you are more likely to remember things if you hear them spaced by increasing intervals. This spacing regime can be difficult to effect in a classroom setting, but it does suggest that blocking similar lessons together might be a less effective approach, and that mixing up your teaching can get better results. This might sound similar to the spiral curriculum that Maths teachers have been talking about for years.

Computing lends itself nicely to this approach. I've written previously that I believe that the representation of data is something that links together most of the topics in Computer Science, and my Computing concept map shows how other topics are linked. This allows us to seamlessly flow from one topic to another in a way you couldn't do with ICT, but also means that you can frequently revisit previous topics in a way that's relevant and helps students to remember and practise them.

As Computing specialists, we have an array of technology available to us, and the temptation can be to over-use it. It's a relatively new subject and we might feel that we have to use new techniques, but don't forget that challenge, repetition and mastery have been helping us to learn for centuries, and there is still evidence to support their use.

Thursday, 29 September 2016

What is Object-Oriented Programming?

So, you think you're getting the hang of the programming basics - you understand the key techniques, you might have even experimented with other uses of arrays, and then someone mentions "object-oriented programming". You've done a quick search using Google - it's in the A level specification and it's all sounding a bit complicated. What doesn't help is that explanations tend to use obscure examples - there's a lot of vehicles - and if you don't really understand these objects then you won't know what to do with them. I'm going to attempt to give you a simple overview of object-oriented programming using something with which we're all familiar - rectangles.

What Are Objects?

You might have written a program in which there are multiple occurrences of the same type of "thing" - e.g. a game with several balls. You might position the balls using x- and y-coordinates, and they might have a size, a colour, a speed and a direction. There might also be things that we want to do with those balls - move them, check for collisions, get them to bounce, etc.

With "traditional" programming techniques, you'd do these with variables and functions. That's six pieces of information per ball, all of which need to be stored in variables. You could call them x1, x2, y1, y1, size1, size2, etc., or you could use arrays/lists of these values. You'll then need to create the functions for moving, colliding, bouncing, etc., and tell these functions which ball to modify. It can all get a bit messy.

In programming, objects are just data structures that represent "things" in the program - in this case a ball. The use of objects can tidy up all of these multiple variables and functions into a nice tidy package and make them easier to manage. Each ball, as an object, will have properties, such as the position, size, colour, etc., and methods, which are things that we can do to the ball, such as changing direction. Really, though, properties and methods are still variables and functions, they're just tucked away, out-of-sight, inside the object.

So What's a Class?

When people first read about object-oriented programming, the difference between classes and objects can seem quite confusing. Most books talk about an object being an "instance" of a class, but here's perhaps a simpler way to think about it.

If I look up the word rectangle in the dictionary, I see an explanation of what a rectangle is - but I don't see an actual rectangle. A definition isn't the same as an example.

A class is, effectively, a definition of what an object will look like. We can describe what a rectangle is, and some of its properties (height, width, etc.), but that doesn't give us a specific rectangle; it's just a pattern or template for making a rectangle. I can then use the class to create an actual rectangle by specifying a height and width, and this becomes the object.

An Example - Rectangles

I'm going to use Python for my example because it appears to be the most popular language in schools at the moment, but the principles also apply in many other languages - I first came across them in C++ in the 1980s. You can download the example Python script here (right-click to download rather than opening in the browser), or view it in Repl.it.  There is also a video introducing this concept on the Computing and ICT in a Nutshell YouTube channel.

First we need to create the class from which the rectangles will be created. It tells Python what properties a rectangle will have, and what methods. There is a class keyword that defines the class, but inside the class definition the properties and methods should look familiar - they are just variables and functions. The main difference is in how they are used - they are accessed using a full-stop between the name of the object to which they apply and the name of the property or method. For example, if a is a rectangle, you can access its width property using a.width. This is useful because rectangle b will also have a width, and we can use the same name for it, e.g. b.width.

The function definitions inside a class have an extra, "special" argument, called self (which you don't actually use when calling the function, it just helps you to know which object it is that you're referring to). The property self.width is therefore just the width property of the rectangle that you're working with at the moment. The example will hopefully make this clearer.

There are also "special" or "magic" functions that you can use to define how your objects behave, and these have names that begin and end with double underscores. You will need at least one of these, __init__(), as it describes what to do when the object is created (or initialised).


Here's an example of a basic class definition for a rectangle:

class rectangle(object):
    def __init__(self, width, height):
        self.width = width
        self.height = height

Note that self is the first argument of __init__, but that I've also included width and height. I've included these because that's the minimum information required to define a rectangle. The two lines in the function set the width and height properties of the rectangle to be the values of width and height passed to the function.

I can now use a command such as a = rectangle(3,2) in my program - or in the IDLE shell - to create a rectangle based on this class. It will have a width of 3 and a height of 2. I can view the width of a by printing a.width, or I could change the height to 4 with a.height = 4.


Here's where the rectangle example comes into its own. I'm not really sure what to do with a truck class, but I can think of some things that I might want to do with a rectangle. These things are the methods that I can add inside the class, and might include calculations such as finding the area or perimeter of the rectangle. These definitions would go directly below the __init__ function and would be indented to the same level. Aside from the use of self, these behave exactly like standard Python functions and may or may not return a value.

def area(self):
    return self.height * self.width

def perimeter(self):
    return (self.height + self.width) * 2

The method a.area() now returns 6 (i.e. a.height x a.width), and a.perimeter() returns 10 (twice the sum of a.height and a.width). Note that the parentheses are required for methods.

In the downloadable example there are further methods, including some that return no value (e.g. rotate() and enlarge()), and some that return a Boolean value (such as square(), which tells you whether the rectangle is a square).

Operator Overloading

Programming languages know how to compare and perform calculations with variables of standard types. With integers, for example, if a = 2 and b = 3 then a + b is 5, a == b is false and b > a is true. But what if a and b are rectangles? Does it make sense to add two rectangles? How can we tell if two rectangles are the same, or whether one is "greater" than another?

This is what operator overloading does. Operator overloading is a somewhat obscure term that just means redefining standard functions and comparisons so that they can cope with the objects that we've created.

In Python, this uses some more of the "magic" functions that I mentioned earlier. In the rectangle example I have included:

# equal to
def __eq__(self, other):
    return (self.width == other.width and self.height == other.height) or (self.width == other.height and self.height == other.width)

# less than
def __lt__(self, other):
    return self.area() < other.area()

# greater than
def __gt__(self, other):
    return self.area() > other.area()

So now I can have a = rectangle(3,2), b = rectangle(4,5), and c = rectangle(2,3) and compare them - a == b will be false, but a == c will be true, b > a will be true, and b < c will be false.

It's up to you to decide how these comparisons are made - or if they can be made at all. I have decided that two rectangles are equal if their heights and widths are the same (or one is a rotation of the other), but when using > and < I am using the area, so that a > b will be true if a has a greater area than b.

When comparisons are made, we are comparing two objects - in the arguments for these functions, self refers to the first object and other to the second, so when we check whether a > b, the function returns whether a.area() > b.area().

Finally, if a = rectangle(3,2), then int(a) will result in an error, and using a on its own in the shell, print(a) or str(a) will just show an unhelpful default such as <__main__.rectangle object at 0x...>. You can define in your class what should happen under these circumstances:

# define the "printable" version of the object
      def __repr__(self):
      return str(self.area())

# what happens when you use int() on the object
      def __int__(self):
      return self.area()

# what happens when you use str() on the object
def __str__(self):
      if self.square():
      return "square"
      return "rectangle"

Evaluating a on its own in the shell will now show 6, as will int(a), but str(a) and print(a) will give rectangle, because that's what I've decided they will do.

Hopefully this has given you a sense of what objects are and how they can be used in your programming. A level students will be expected to have an understanding of these ideas.

There's plenty of help available on-line, particularly if you want to overload an operator that I haven't included here. You might also want to research inheritance - this is where one class is based on, or is a variation of, another. For example, I might want to have a more generic shape class, including extra information, such as the number of sides, and base my rectangle class on that, setting the sides property to four in __init__. I could then also have a triangle class, setting sides to three, etc.
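The shape/rectangle/triangle idea described above might be sketched like this (the triangle details and the use of super() are my own additions to the example):

```python
# A generic shape class holding the number of sides, with rectangle
# and triangle classes inheriting from it.

class shape(object):
    def __init__(self, sides):
        self.sides = sides

class rectangle(shape):
    def __init__(self, width, height):
        super().__init__(4)        # a rectangle is a shape with 4 sides
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

class triangle(shape):
    def __init__(self, base, height):
        super().__init__(3)        # a triangle is a shape with 3 sides
        self.base = base
        self.height = height

a = rectangle(3, 2)
t = triangle(4, 5)
print(a.sides, t.sides)   # 4 3
```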