KS3 – SoW and Assessment Strategy

I’ve noticed a lot of chatter on Facebook lately about assessment at KS3 and about what to put into schemes of work / schemes of learning.

Since my department and I have spent a lot (a LOT!) of time over the last couple of years completely reworking all of the above I figured it wouldn’t do any harm to share it. It’s a team effort and includes some fantastic ideas and units for which I can take no credit at all. No warranty is given or implied and your mileage may vary!

If you want the resources with none of the reasoning or justification then just head on over to pi.mwclarkson.co.uk and download away. If you DO care about the justification (which I think is quite important, as it goes), then read on.

Thematic Units

For a long time we used to teach a half-term on spreadsheets, a half-term on databases, a half-term on image editing, etc., and then visit each topic again in maybe a year, maybe 18 months. This meant we could spend a good chunk of time focusing on one area, but the retention was poor.

We decided a little while ago to try more thematic units – so we have a unit about my Aunt Mabel who bought a zoo on a whim. She needs a spreadsheet to find out if she can afford to feed the animals, some image editing to create a gift voucher, a database for annual membership, etc.

When specifying the equipment needed for a new youth club the students design a floorplan, create a spreadsheet to track and adjust costs, write to their local MP, learn about networking and create a slideshow to convince the PTA to help fund it.

And so on – the key phrase for me is ‘little and often’. The disadvantage is that students don’t spend a big block of time looking deeply at the skills, so you need to make sure you explicitly teach about slideshows and DTP skills, not just expect students to ‘know’ what good design looks like and which specific skills to use.

We’ve also gone for an approach that includes a fair bit of computer science (programming, binary, logic gates, algorithms) but also a lot of multimedia topics (mind maps, storyboards, image editing, comics, video editing, audio editing) and ‘traditional’ IT (spreadsheets, databases, posters and PowerPoints). This is partly because we have 3 routes at KS4 – GCSE CS, Cambridge Nationals Creative iMedia and GCSE ICT (with vocational ICT to come) – and partly because we think (as a department) that our job is to help prepare students for life and for their future, not just an optional GCSE that not all will pursue.

Online vs Dead Tree submissions

Being a massive Moodler I’ve been an evangelist for online assessment for years. We’ve tried online discussions, wikis, self-marking quizzes, ePortfolios and much more. And, honestly, we never got it right.

When it comes to work scrutinies I was often tempted to drop a URL off in each box when SLT wanted the books, but ultimately I had to cave. And I admit it – the books are a better solution.

Each student gets an A4+ sized exercise book and they sometimes do work in there, but more often print off an assessed piece of work. It’s not ideal for animations, but you can include a screengrab which is usually enough to trigger a memory from circulating during the lessons and you can also encourage students to annotate or justify their work, demonstrating knowledge as well as skills. In addition, the kids can find their work and refer back to it easily. Having to negotiate a VLE once a week and expecting the kids to really understand the underlying structure isn’t as realistic as it might sound to those of us who use these systems multiple times a day and might well have computing / IT degrees.

It’s not perfect, but honestly I feel the books are the best solution I’ve used so far.

Regular Assessment / Deep Marking / WINS

The policy at my school is that we do a solid bit of marking every 5 lessons / 5 hours. This means that we don’t have to mark every piece of work, but that students are getting regular feedback throughout their studies.

The structure of the feedback has to be in the WINS format (What was good, Improved if, Next steps and then a Student response). I’ve heard of PENS in a number of schools which is very similar (Positives, Even better if, Next steps, Student response).

We also have a grading system that goes MEP – EP – BEP – UP (More than Expected Progress, Expected Progress, Below Expected Progress, UnderPerforming). This is printed on and highlighted.

Given that one of my colleagues will have 330 KS3 pupils next year we had to make the marking manageable – so we’ve produced one pre-populated WINS sheet for each unit with all of the likely comments written in and 3 differentiated questions for students to tackle that are designed to make students reflect on their work at different levels (think Bloom’s).

I wanted to avoid having students working on something for 5 lessons, then getting some feedback, then spending another lesson making improvements and resubmitting it. You end up in ever decreasing circles and lose valuable time for moving on – and with the ‘little and often’ curriculum we’ll be coming back to those skills again soon enough.

Tracking Progress / Assessment Without Levels

In order to better track progress all of the subject leaders at my place were tasked with describing the knowledge, skills and application that students would be expected to gain each term, all without using levels. These AWoL sheets are heavily skills focused for us and are broken down into the three strands of IT, Media and Computer Science. They relate directly to the unit WINS sheets and are easily attacked with a highlighter once a term.

In addition we have an overall tracking sheet with the 3 strands, each split into 2 (so IT has data handling and presenting information, Media has creativity and planning, CS has programming and technical understanding). By highlighting these at the same time as the termly sheets we can show overall progress.

It costs a bit in highlighters but saves a lot in blue, black, red, green and purple pen!

I’m not promising it’s perfect, and I would never claim this is the ‘right way to do it’ – but it’s what we’re doing and you’re welcome to use it.

If you do decide to adapt and improve it, please consider sharing and please give some credit to the team that helped put it together (Egglescliffe School Computing & ICT department, past and present).

GCSE Computer Science specification roundup

Finally TPTB (Ofqual) have accredited the OCR GCSE specification for computer science. While this was inevitable, I didn’t want to review the specifications until they were all in.

So, here are my thoughts:

WJEC / Eduqas

Pros:

I went to look at this first because I’m still intrigued by the online exam. Assessing programming skills in a timed environment is quite realistic and avoids the dirge of 20 hours of the kids staring at a screen with me having little opportunity to support them. The CA can become an exercise in grinding (akin to repeatedly carrying out a boring task to level up in a role-playing game) and so I’ve always thought there should be something like the AQA A Level Comp 1 exam at GCSE, and WJEC are the only board to offer it.

Cons:

It has to be Java and it has to be Greenfoot. The practical exam cannot be carried out in any other language or environment. Now I like Java, and I love Greenfoot. But I’m not sure it’s the right starting point for GCSE. There’s a lot of boilerplate and a lot of syntax (semicolons, curly braces, etc.) which VB, SmallBASIC and Python avoid. It also means you have to introduce object orientation (explicitly stated in the spec) – which is a big leap for a new programmer IMO.

More worryingly, the exam is in addition to, rather than instead of, the NEA. So you still get the 20 hour dirge on top.

The theory content explicitly states that students need to be able to use HTML. That, in itself, is not necessarily a bad idea, but it’s an extra language and set of syntax rules to learn on top of everything else.

Conclusion:

At this point I’m out. A glance through the theory content looks broadly similar, but I want the practical exam to be instead of NEA, not in addition, and I don’t want to be forced into one environment – at least not if it’s an environment I’m not entirely comfortable with choosing.

Edexcel / Pearson

Pros:

The specification is in line with the other offerings. Two written papers, one 20 hour NEA. The content is similar across all boards and is a notable step up from the previous incarnation (e.g. binary representation now needs to include sign & magnitude and two’s complement representation for negative integers). Reading the sample papers – this new course is going to be hard! But this is true for all boards.

Cons:

The controlled assessment must be carried out without access to the Internet or a school intranet. So no extra help allowed, even if vetted internally. This is the strictest set of rules I’ve seen. You can put copies of appropriate digital documents in home directories, so I’ve chilled out a little after my 4th reading of the spec.

You are also restricted to one board-set NEA task.

The mark scheme for the NEA gives 24 marks (40%) for implementation and 36 marks for analysis, design, testing, refining and evaluation. The systems lifecycle, and consideration of data structures and testing, are important – but that sounds like a lot of emphasis on writing about programming, with less than half for the actual programming.

The controlled assessment sample provided was quite vague (again, a common theme). This allows for creativity at the top end but very little support or scaffolding for those who might struggle.

Conclusion:

Theory and exam-wise, it looks much of a muchness. The NEA also looks broadly in line (which is part of the point of the reboot), but the controls are extremely strict. I did find the exam papers looked fairly accessible.

AQA

Pros:

AQA – you know where you are when reading the specification. It’s not the single most important aspect but I find the format of the document very easy to follow.

It’s also the exam board we are using at A Level, so there ought to be some good commonality between the two levels of specification. I always thought that the OCR GCSE legacy spec suited the AQA AS legacy spec extremely well.

Again, familiar content. This time no negative binary numbers, but you do have things like Huffman trees, which is something I will need to investigate myself before I’m ready to teach.

Internet access is allowed (implicitly) for the NEA. The only specific reference I could find was in section 5.2 (avoiding malpractice), which says that students must not copy directly from “the internet or other sources without acknowledgement”.

I’m not sure if this is a pro or a con – my current Y11s have had a really difficult time trying to avoid spoilers, or judge what is a spoiler, on their recent controlled assessment tasks. It’s certainly more open than the Edexcel approach, however.

The sample NEA task looked much more scaffolded than the Edexcel task which is a key issue for those students who need a bit more support and guidance.

Cons:

Only 30 of the 80 NEA marks are for programming, the rest for analysis, design, testing, refinement and evaluation. That’s 37.5%, and I thought Edexcel’s 40% was low!

AQA’s interpretation of pseudocode looks scarier than Edexcel’s. Where Edexcel has lots of text-based output statements, AQA’s sample exam questions look like a sea of syntax that could well put students off.

Conclusion:

Honestly… I think it’s close between Edexcel and AQA. I much prefer the AQA sample NEA task, but prefer the Edexcel exam papers. The theory content is similar, with some subtle differences but nothing that couldn’t be overcome with good planning from the outset.

OCR

Pros:

It’s OCR. It’s Rob, Vinay and Ceredig – the team I’ve known off and on since 2010 (OK, it was George and Sean that I knew initially, but still…). It’s the team with a very supportive Facebook group that I’ve made extensive use of and taken part in.

Edit to add: The support is a huge issue. Whether it is exam board support (the coursework consultancy is a great idea) or community support – having other centres nearby with the same questions and the opportunity to moderate both NEA and internal assessments is invaluable.

The new course is an iteration of the old one. I’m very familiar with the old one and have largely enjoyed it. The content has been ramped up here, as with elsewhere. Still no negative numbers here (unlike Edexcel), and not much that I’ve seen here and not elsewhere.

The NEA allows you a choice of 3 tasks each year – the only board to offer this. So the students can choose the task that suits them best, or you can choose for them (more likely). The NEA also allows intranet access. This is implicit rather than explicit, but I’m sure I’ve heard from Rob or Ceredig that this would be acceptable (within reason, of course). No Internet, but see above for comments on the rampant cheating that this might help to alleviate.

The NEA mark scheme awards 20 of the 40 marks (50%) for programming, and the rest for analysis, design, testing, refinement and evaluation. The highest ratio of doing to writing about doing that I’ve seen yet.

The NEA tasks are broken down in a similar way to the AQA offering, providing a little more clarity than the Edexcel vagueness but still with freedom to explore at the top end.

Cons:

Edited: It’s OCR – which might lull you (or me) into a false sense of carrying on as we have previously. For old hands like me who’ve been teaching the OCR spec since 2010 it is possible I will slip into teaching the same content – which would be a very bad thing, as there is a definite shift.

OCR’s is the only spec that explicitly references SQL. I didn’t see anything in the sample exam papers but it’s definitely there in the specification. I don’t mind SQL, but given the choice of enforcing that students learn another set of syntax versus not doing so, I’m tempted to leave that until KS5.
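For anyone weighing up that extra syntax, GCSE-level SQL generally means single-table queries. Here’s a rough sketch using Python’s built-in sqlite3 module – the table, fields and data are all invented for illustration, not taken from any OCR material:

```python
import sqlite3

# Invented example data - the sort of single-table setup a GCSE
# student might be given.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE animals (name TEXT, species TEXT, daily_cost REAL)")
conn.executemany(
    "INSERT INTO animals VALUES (?, ?, ?)",
    [("Eric", "Elephant", 95.0), ("Penny", "Penguin", 12.5),
     ("Ellie", "Elephant", 90.0)],
)

# A SELECT with a WHERE clause and ORDER BY - a whole extra set of
# syntax rules on top of the students' main programming language.
rows = conn.execute(
    "SELECT name, daily_cost FROM animals WHERE species = 'Elephant' ORDER BY name"
).fetchall()
print(rows)  # [('Ellie', 90.0), ('Eric', 95.0)]
```

Not difficult in itself, but it is one more notation to teach alongside everything else – which is exactly why I’m tempted to leave it until KS5.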

The NEA mark scheme only offers 12 of the 40 marks (30%) for programming. The lowest ratio of doing to writing about doing that I’ve seen.

Yes, that’s a contradiction to what I said above. There are 8 extra marks for ‘development’. Current OCR centres will be familiar with this section – it is kind of about doing and kind of about writing, and I didn’t see it quite as explicitly in the other specs. Going back, it is there in the AQA spec (approx. half of the programming marks), although there it is more about the summative description of what you have created rather than a narrative of how it was created. The Edexcel spec also focuses on the completed product, with only a reference to screenshots demonstrating debugging skills.

In my experience the documenting of the development process is one of the most frustrating elements for the students. They want to be on and doing, not stopping to write it up as they go. And this leads to frustration and also to lost marks when actually they are very good programmers and problem solvers.

The chunked / scaffolded NEA tasks are not quite as chunked as the AQA sample assessment task, I don’t think, though still clearer than Edexcel’s.

Conclusion:

The NEA (only 20% of the outcome, but a significant investment of time and enthusiasm) offers the most freedom and a fair amount of support, as well as a familiar structure for the write-up.

The exam structure and presentation is largely familiar which is reassuring, but I would need to keep making sure I’m delivering the right content for the new spec and not the old one.


Overall Decision?

This is harder than I thought it would be.

I like the OCR team. I’m familiar with the OCR way of doing things and I like having the flexibility of choosing from 3 tasks each year. I like bullet-pointed, chunked programming tasks. I don’t need the Internet.


OCR still has the development section of NEA, which ought to be fine but is a drag. With AQA I can reduce the impact of that, keep my bullet points and still have freedom over how much the students can access online resources. Edexcel have made the NEA task description too vague and locked the rules down very tightly.

Exam wise I think I prefer Edexcel. Negative numbers aren’t so tricky and that was the only difference in theory I could find on a quick scan. The exam papers look relatively friendly and the pseudocode wasn’t as off-putting as AQA.

For me, it’s down to Edexcel vs OCR. With OCR I get more support and feel more comfortable with what is expected. With Edexcel I think there is the potential for a more accessible pair of exams, though I do worry about the NEA.


Further thoughts

This new spec is going to be hard. Noticeably harder than the current spec. 2D arrays, subroutines (functions, procedures and libraries), specific network protocols to learn and more focus on writing accurate algorithms. I’m glad the NEA weighting has dropped a lot, which means we’ll have more time for exploration and learning instead of assessing and assessing, but next year is going to be a real challenge.
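To give a flavour of that step up, here’s a minimal Python sketch of two of those topics together – a 2D array (a list of lists in Python) and a subroutine. The marks grid and the function are my own invented example, not from any board’s materials:

```python
# A 2D array: each row is a student, each column a question on a test.
# (Invented data for illustration.)
marks = [
    [4, 7, 2],
    [5, 5, 6],
    [3, 8, 4],
]

def question_totals(grid):
    """A subroutine (function) returning the total mark for each column."""
    totals = [0] * len(grid[0])
    for row in grid:
        for col, value in enumerate(row):
            totals[col] += value
    return totals

print(question_totals(marks))  # [12, 20, 12]
```

Moving students from a flat 1D list to iterating over rows and columns like this, inside a subroutine they’ve written themselves, is precisely the kind of jump the new content demands.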

Controlled Assessment Strategies


How many teachers are spending at least some of their time planning schemes of work, resources and other bits and bobs for the next academic year?

How many of those teachers will work in 1-hour chunks (or some other arbitrary time period), during which they start, get stuff done, save and then (whether finished or not) put everything away and start a new task for another hour?

My Y10 computing students have just finished a 20 hour controlled assessment task. 2 or 3 times a week they’ve come into my classroom, logged in, grabbed their controlled assessment booklets and ploughed on with a task. 55 minutes later they get told to stop, save, put it away until the next time – worst-case scenario, due to their timetable, in 6 days’ time.

Don’t get me wrong. There are interventions, tips, hints, guidance and all sorts of other things going on – I’m not just leaving them to fend for themselves. What seems ludicrous, though, is that sometimes the students are just building up a head of steam, getting into the zone, getting themselves into the task, when they get the call to save, log off and pack up until next time. Having to repeat that process of getting into the right frame of mind, and into the right headspace to visualise the problems and challenges they’re dealing with, must sap the students’ productivity.

When I have a big job to do, I’ll sit down and do it. It might take me 90 minutes instead of an hour. It might take me 4 or 5 hours. It might take me a few days or even weeks, but it’s very unlikely that I’ll be using pre-defined, 1 hour chunks to get it done. It’s unnatural to do so.

I’m seriously considering booking my students off timetable in order to complete their controlled assessment in larger chunks. Initially I thought about 5 days, Monday to Friday. That would give me time to do some bits that don’t count towards the time and would give the students time to really get themselves into the task.

The downsides? There’s little time to reflect on the problem. A task that is completed over 3 or 4 weeks has time to permeate, and gives the students time to research and reflect. It may be that, with so many subjects (plus all the not-inconsiderable stuff going on outside the classroom), most students aren’t really doing this anyway (my homework tracking book would back that up) – I’m not sure.

There’s the logistics of covering my timetable for a whole week, as well as the effect on other subjects of losing their students for a week. If every department did that then it might (MIGHT) be chaos. Or, it might work out really well. Certainly the Geography department take students out for 3 days of fieldwork around this time each year. Why not computing students as well?

Another issue is finding the time to help students identify weaknesses and to spend some time away from controlled assessment working on them. The OCR programming tasks, for example, come in 3 parts – each progressively more difficult. I’ll usually stop and do a week or two of revision on a particular concept before starting each task, to make sure the students are fully prepared. So maybe I do 1 day for task 1, 1.5 days for task 2 and 2 days for task 3, spread out over 3 weeks?

This also raises the question of whether the model is flawed for the rest of the year. Carousels, where students learn about Subject A for a half term, Subject B for a half term and then Subject C for a half term, with longer lessons (perhaps a half-day at a time), might be more natural and would allow for longer project-based activities to be explored more effectively. But that might be a post for another day.

Has anyone tried the more intensive approach for longer controlled assessment tasks? Any feedback from those who’ve been there would be much appreciated.

Sssh… it’s a secret!




I had a tutorial lesson today. Or maybe citizenship. Or PSHE. You get the gist…

The aim of the lesson was for the students to understand the concept of budgeting. In addition to the central aim I wanted them to appreciate what their finances might be like in the future and to compare their expectations with harsh reality.

So, printing off a semi-random budgeting sheet found on a letting agent’s website, we proceeded to fill it in as a class. It took the full hour.

We discussed the cost of renting vs buying, shopping at different types of supermarkets, repayments on loans for different standards of car and, with some degree of shock for the students, the difference between gross and net salaries!

At the end of it we packed up, threw the paper in the bin and went to lunch. I didn’t formally assess their work, they didn’t produce evidence of having completed tasks or showing progression in their knowledge and understanding. I would have been graded as Requires Improvement, or probably Inadequate.

And yet, I’m absolutely certain that EVERY student in that class learned something. They might not remember the figures, but they were surprised by how inaccurate their preconceptions about incomes and expenditures were, and they bought into the lesson really well.

I could have built in more activities – learning checkpoints, scaffolding, differentiated resources and mini-plenaries. And in many cases those tools are incredibly useful. But every once in a while I like to just spend the full lesson exploring something and not necessarily weighing the pig every 10 minutes to see if it’s gotten fatter.

But I’m in the middle of my appraisal, so sssh… it’s a secret! 😉

Assessment and Feedback in ICT


Apologies for the lack of posts recently, but real life has been taking over of late.

Thankfully, I was emailed today asking about how I deal with assessment at KS3 so I can kill two birds with one stone.

The email wasn’t so much about what or how I assess, but how I communicate this to the students and how they respond to it. In many subjects a stuck-in sheet at the front of the book serves to maintain a persistent and consistent platform for feedback and responses – but in ICT lessons we don’t use exercise books, and I’m loath to start just for that reason.

We could always give students pieces of paper, or have them filed in the room, but this seems similarly arbitrary and far from ideal.

We did try using Moodle for a good few years, with a course set aside just for assessed pieces of work to be uploaded and feedback given. It required one upload assignment for each assessed unit and while the feedback was persistent (students could always go back and look at it) it was still very unidirectional.

Since about the middle of last year we’ve been using the Moodle Dialogue module. While I was loath to start adding non-core modules because of the hassles involved in upgrading further down the line, the functionality really couldn’t be found any other way.

Installation and setup is simple, although it’s virtually essential to be using groups*. I find it easiest to get the students to initiate the dialogue (you need to be enrolled as a teacher for the students to see you) although you can start a dialogue with an entire group at a time.

Both sides can write messages and upload files, and the conversation is private between you and the student. This way the student can upload their work with a brief self-assessment, you can leave detailed feedback and they can respond. Every 3 or 4 lessons we bring the students back to the dialogue to look at their specific targets and measure their own progress.

We’re also in the process of designing some large display boards with level descriptors so students can refer to these as they go.

It’s not perfect, and one of the bugbears is that impatient students will hit the submit button 3 or 4 times, creating 3 or 4 entries that can’t be edited or removed after the 30 minute grace period.

On the whole, though, it’s working very well, and in whole-school discussions and working parties on assessment and feedback our system has been praised by SLT – so it can’t be that bad!

* Top tip: Set up the groups before you enrol the students and give each group a unique enrolment key. Put a different enrolment key on the course and when students sign up with their class’ enrolment key they automatically appear in the right group.

The latest bandwagon, or something better?


I’ve been kicking around an idea for a few months now.

Actually, no, that’s a lie. I’ve been following some other people kicking around an idea for a few months now and I’ve been feeling like this has the makings of a very good idea. Possibly.

Chris Allen (@infernaldepart) and Brian Sharland (@sharland) keep posting messages about Digital Badges, and a blog post by Dave Stacey (@davestacey) has really filled me with enthusiasm (and a bit of awe at the scope of his pedagogical vision). The basic principle is quite simple; instead of assessing each piece of work as Level 4a, Level 5c or whatever you use the Scouting model of awarding badges as a way of recognising achievement.

This has a real advantage for me that it’s not about me looking at each piece of work and grading it at one of several levels, it’s not about hawkishly passing judgement on the student every time. The emphasis is instead focused on rewarding work, and specifically in rewarding progress.

At Beaver Scouts, my daughter recently earned her ‘1 night away’ badge. Does this represent any learning objectives? No. Does it represent an achievement? Absolutely. It won’t be long before she earns her ‘5 nights away’ badge, then 10, 20… I think my son should be approaching 50 by now. Progress, achievement and rewards.

And actually, it’s not entirely dissimilar to the real world. When I have to write a particular document it is ultimately pass/fail. My SEF either comes back needing amendment or it is accepted, I get to keep my ‘administrative paperwork’ badge and everything else that comes along with it.

All this is helped by Mozilla’s Open Badges framework. The idea is that people can complete tasks and be awarded a digital badge for their efforts. They even have a virtual backpack on which you can proudly display your badges. And being open source (hence the Open Badges name), anyone can create their own badge, or set of badges.

There are two issues I still need to overcome, however, if this is going to work.

One, I need a way to embed the badges in the student mindset. They need to be displayed somewhere in a way that is “automagic and omnipresent”. It’s no use asking them to sew them on their jumpers, and I don’t like the idea of students having to go to a third-party website just to check their badges and to see the badges of others. One idea I’m looking at is a login script that will display the student’s badges on their desktop. I’m also hearing rumours of a potential plugin for Edmodo and/or Moodle.

The second is that, for this to work, it needs to be consistent across the department, approved by SLT and mapped to levels so that we can still report using the existing systems. None of that is necessarily insurmountable; I just hope that my departmental colleagues are willing to give it a go.

I genuinely think that there is an opportunity here to provide feedback, offer recognition and to motivate students in a fundamentally different way.

Common misconceptions

I remember that as part of my PGCE I had to write a document to describe common misconceptions that students have in ICT lessons. I don’t remember what I wrote, to be honest, but I’m sure that I wrote it from my perspective and didn’t actually ask the kids – just based the document on my own observation.

Fast forward a couple of years and I started using the (free & excellent) Yacapaca KS3 assessments to baseline our Year 7 students as they entered the school. I used this to generate an approximate level for each student – but didn’t delve too much into the specifics.

Fast forward another couple of years and I finally decided to delve a little deeper into the question-by-question analysis that is available. 20 minutes later I had a list of common misconceptions based on the students’ answers, rather than my own unsubstantiated observations. For every shocker listed below there was another question that was answered well, and I haven’t included particularly difficult or unfair questions (such as how many managed to identify the correct HTML syntax for a mailto hyperlink).

Before I show you my findings, what does this mean? Well, first of all it is not meant as any kind of attack on our Y7 pupils or on the primary sector. It is what it is, and I am sure that my school is far from unique in our results. We are a very successful school with a largely affluent intake (>99% of students have an Internet-connected computer at home), so there is no shortage of access to equipment. I see it as an indication of the level we need to be aiming at as we start KS3.

Next time you tackle a spreadsheet unit, think about your language. If over 70% of the class don’t understand the term ‘profit’, then how can you expect them to create a formula to calculate it?
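The calculation itself is trivial once the vocabulary is in place – it’s the term, not the arithmetic, that trips students up. As a sketch (all numbers invented), here is the same thing a simple spreadsheet financial model does, including trying out different prices:

```python
def profit(income, costs):
    # Profit = income - costs. The vocabulary, not the maths,
    # is the barrier for most students.
    return income - costs

# A spreadsheet-style what-if: trying different ticket prices
# (invented numbers) in a simple financial model.
for price in [2.00, 2.50, 3.00]:
    tickets_sold = 100           # assume sales are fixed, for simplicity
    income = price * tickets_sold
    costs = 180.00               # fixed running costs
    print(f"Price £{price:.2f}: profit £{profit(income, costs):.2f}")
```

A student who doesn’t know what ‘profit’ means can’t get past the first line, however comfortable they are with the spreadsheet itself.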

So here is the list:

18% of students thought the best way to copy a real photograph was to cut and paste.

54% of students thought that a table of data would be a better graphical aid than a colour coded diagram.

66% of students misunderstood the differences between cut & paste and copy & paste.

22% of students thought a database is a program for writing documents.

18% of students thought that a DTP package would be suitable for sending emails.

16% of students thought a joystick could be used to copy a photo onto a PC.

60% of students were unable to identify ‘fields’ and ‘records’ in a database table.

36% of students thought that a search engine would find files within their own workspace.

56% of students thought that the word count, spell checker or grammar checker would be a useful tool for improving the layout of a page. [Correct answer: print preview]

44% of students thought that a printer, scanner or speakers were required to access the Internet.

49% of students thought that a spreadsheet, word processor or database would be used to design a flyer [11%, 27%, 11%].

67% of students were unable to recognise a decision in a flowchart.

48% of students thought that ‘including lots of animation and music’ is an important factor in web design. [On a personal note: AAARRRGGGHHH!!!!!!]

Only 22% of students thought that the most important way to ensure that a business website will be useful would be to obtain a list of requirements from the staff.

Only 41% of students correctly identified that a web designer would need an Internet-connected PC. 25% suggested a high quality printer, 21% suggested a plotter and 13% suggested datalogging equipment.

68% of students thought that a flashing light, buzzer or monitor was an input device.

72% of students failed to identify that the primary benefit of a financial model was to try different prices.

38% of students thought that too much text or too many hyperlinks would significantly slow a website down, rather than too much multimedia content.

50% of students failed to identify which fields to search in a database table.

79% of students failed to recognise formulae as a spreadsheet tool used to make predictions.

76% of students were unable to identify the definition of ‘profit’.

52% of students failed to identify a database as the best tool for storing details of inventory.

25% of students thought that the total costs and income would need to be calculated BEFORE being entered into a spreadsheet model.