What is effective assessment in a MOOC?

Le Penseur. The Thinker. By: Sigfrid Lundberg.

Recently I completed my first Massive Open Online Course (MOOC). Which one it was isn’t relevant to this post, as I don’t intend this to be a review of that specific course. After completing the course, I started to reflect on the way assessment was used. These reflections led me to the conclusion that the assessment wasn’t particularly effective. This conclusion was quickly followed by another: I don’t know what the alternatives are.

Let me explain…

The course was structured around six weeks of video-based content. Each week also included three different types of assessment: a wiki, a discussion forum and a quiz. I will explore my reflections on each of these assessment types below.

Wikis: A search for more information

The wiki was used to encourage the students to search out more information related to the content presented that week. For example, additional references, the organisations mentioned, the reports mentioned, or definitions of key terms.

The wiki used Markdown, a popular lightweight markup language, and one that I’ve used off and on for a number of years. Indeed, my honours thesis was written using Markdown and transformed into a PDF using the MultiMarkdown application.

The trouble with the wikis is that they became a dumping ground for copied and pasted content. I consider Markdown to be an easy-to-understand and easy-to-use markup language. I acknowledge that I may be biased in that opinion, especially as I’m very familiar with it, but the basics are still very easy to understand and use.

The wiki contained many formatting errors, such as missing headings, incorrectly formatted links, and other irritating little things. A few weeks into the course there was an attempt to at least have the students alphabetise their entries. Unfortunately, by then the bad habits had already formed.

The biggest problem was that with hundreds of students all contributing content, the wiki quickly became an impenetrable wall of text. Any grains of good information were quickly lost in the piles of chaff.

Discussion forums: A prompt for reflection

The intention of the discussion forums, as I perceived it, was to act as a prompt for reflection and, hopefully, critical thinking. Each week the lead-in for the forum was a question and some additional information for consideration. There was also an entreaty to reply to at least one of the other students’ posts.

Once again the primary problem was one of scale. With hundreds of students posting in the forum, the list of posts quickly became exceptionally long. As a student it was dispiriting to try to come up with something new and original. It also led to the feeling that it was very unlikely anyone would read your individual post.

Quizzes: Checking your level of knowledge

Each week included a quiz of 10 to 15 questions, predominantly multiple choice or true / false. Each question allowed a set number of attempts, which meant you could, as a student, take a guess if you weren’t sure, in the knowledge that you had at least one more attempt. The exceptions, for obvious reasons, were the true / false questions, which only allowed one attempt.

The primary issue for me with the quizzes is that I’m not convinced multiple choice questions are effective in evaluating student knowledge. There were no essay questions, or even short answer questions. As such, the quiz becomes a simple mechanism to see whether a student can recall facts.

The question in my mind is: does the ability to recall facts show understanding? Or does it just reflect your ability to remember things?

Lots of questions, no answers…

The end result of my reflections is that I have lots of questions, and no real answers. I believe that the root of the problem is scale. That is, a MOOC is massive and as such is designed to cater to as many students as possible. A MOOC is also free, or at the very least available at low cost. Therefore any assessment primarily needs to:

  • Be available online; after all, that is one of the words in the acronym.
  • Cater to as large a number of students as possible.
  • Work with as little investment in time and money as possible.

The end result, at least in my experience, is assessment that is not very effective. I will preemptively agree that a sample size of one is not sufficient to generalise from, but it has given me food for thought. Perhaps a MOOC isn’t meant to provide in-depth understanding. Is it only meant to provide an overview of a topic, as a means of whetting a student’s appetite for further investigation and learning? Or, if I were to be extremely cynical, is the purpose of a MOOC more of a marketing exercise, intended as a way to recruit more students to the sponsoring institution?

These reflections have got me asking many questions, and I would very much like to explore them further. I’ll also be thinking about them before I sign up for my next MOOC.

Adelaide International Kite Festival

Below are a number of photos that I took a few months ago at the Adelaide International Kite Festival at Semaphore. The photos were taken on the Easter long weekend this year, and I’ve only just found the time to sift through the 300 photos I took to find the best ones.

Gundam G-Self HG Model

Below are some pictures of the second Gundam model that I have ever assembled, the Gundam G-Self. My thanks to rurisu_hoshino for introducing me to the world of Gunpla and Gundam models.

On Safari with Inline SVG

“Pillar of Darkness Expedition: 1913”. Uploaded by: davidd.

At the start of this month I announced on Twitter that I had made the transition from working in the client development team at NetSpot to working in the product development team at Moodlerooms. Both companies are owned by Blackboard. I still work on adding features, and fixing bugs, in software built on Moodle. My focus is now on features that are part of our core products, rather than client-driven customisations.

One of the more interesting defects I’ve needed to resolve in the past few weeks related to the new Snap theme, the use of inline Scalable Vector Graphics (SVG), and the Safari web browser.

Using SVG is important as it allows our designers and developers to use icons and other page elements that scale to different display sizes and resolutions without any loss of quality. Additionally, as SVG is a text-based format, an SVG file will often be smaller than a bitmap graphics file, especially if there would otherwise be a need to create multiple images for different screen resolutions.

As SVG is a text format, it is possible to embed the SVG code directly into the HTML markup of the page. This also helps speed up page download times as the HTML of the page, and the SVG code required to render elements of the page, can be downloaded in the same request.

The defect that I needed to work on related to the way that the Safari web browser renders inline SVG. After a fair amount of debugging, and googling, I discovered that Safari, at least at the time of writing, cannot render inline SVG if the SVG code comes after the place in the page where it is used.

For example this will not work in Safari (abbreviated code for readability):
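A minimal sketch of the failing pattern (the ids and path data here are illustrative, not from the actual theme):

```html
<!-- Failing pattern: the <use> reference appears BEFORE the SVG
     content it points at. Ids and path data are illustrative only. -->
<section>
  <!-- The icon is used here, via a reference to #icon-search... -->
  <svg class="icon">
    <use xlink:href="#icon-search"></use>
  </svg>

  <!-- ...but the SVG content is only defined further down the page. -->
  <svg style="display: none;">
    <g id="icon-search">
      <path d="M10 2a8 8 0 1 0 5.3 14l4.4 4.4 1.4-1.4-4.4-4.4A8 8 0 0 0 10 2z"/>
    </g>
  </svg>
</section>
```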

In the code listing above, the inline SVG content is defined by the second svg element, towards the end of the markup. This is where the code necessary to render the icon lives.

The icon is meant to be displayed by the first svg element, earlier in the markup. The intent of the xlink:href attribute on its use element is to point at where the SVG code is located in the page, as identified by the id attribute on the g element.

In my testing the icons would display correctly in all browsers except Safari. To get Safari to render the content correctly, I needed to move the SVG code so that it appeared in the page before it was used. For example, this will work in Safari:
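A minimal sketch of the working pattern (again, the ids and path data are illustrative, not from the actual theme):

```html
<!-- Working pattern: the SVG definitions come first, so Safari can
     resolve the reference. Ids and path data are illustrative only. -->
<section>
  <!-- The SVG content is defined at the start of the enclosing element. -->
  <svg style="display: none;">
    <g id="icon-search">
      <path d="M10 2a8 8 0 1 0 5.3 14l4.4 4.4 1.4-1.4-4.4-4.4A8 8 0 0 0 10 2z"/>
    </g>
  </svg>

  <!-- The icon is used here, after its definition. -->
  <svg class="icon">
    <use xlink:href="#icon-search"></use>
  </svg>
</section>
```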

All I needed to do was move the SVG from the end of the section element to the start of the section element. Now Safari is able to follow the id in the xlink:href attribute and display the appropriate icon.

The takeaway from all of this is that while it may be tempting to put the SVG code at the bottom of our HTML file, or at the bottom of the enclosing element, this will mean that the page won’t render correctly in Safari. It is preferable to put the SVG code at the start of the main enclosing element, as this will ensure cross-browser compatibility.


Shy Birds Breakfasting in North Adelaide

On my way into work earlier this month, I came across these two shy birds having breakfast in the parklands.

Unfortunately they were too shy for any close up photos.

Hercules is a Supporter of the Irish Cricket Team

I was walking through the parklands on my way to work at Blackboard, formerly NetSpot, in North Adelaide when I came across Hercules. I’m assuming he is a supporter of the Irish Cricket Team. My friend Mark Drechsler tells me that they were in town on the weekend for a cricket match.

A Green Torrens Lake

On many mornings, my walk to work at Blackboard, formerly NetSpot, in North Adelaide is very pleasant. It is a nice way to start the day, and is around a 3km walk from the bus stop in the CBD where I get off the bus. The only mornings when it isn’t pleasant are when it is really hot, or really cold, or it is raining and I’ve forgotten my umbrella.

On this particular morning in late February the Torrens Lake, part of the River Torrens, was quite green, with a significant algal bloom. It was so striking that I had to stop and take a few quick photos.

Algal Bloom in the Torrens Lake. By: techxplorer

To get a sense of the scale of the bloom that I could see, I used the panorama function of my venerable iPhone 4S.

Panorama of the green Torrens Lake. By: techxplorer

No, I will not upload my address book. So stop asking!

“Privacy” by: _Bunn_

Those who follow me on Twitter would have seen the following conversation, which I had last night with a member of the LinkedIn Customer Service team.

Now that I’ve succumbed to the lure of LinkedIn, I’ve been exploring the service in order to make the most of it. I think I should work to extract the most benefit out of sharing my personal and private information with a faceless corporation.

Screen capture of the Twitter conversation.

One thing that I have noticed is that LinkedIn is very persistent in asking me to upload my address book, either directly or by giving it access to my Google Account. This is something that I am not prepared to do, and being asked to do it repeatedly is getting very tiresome.

There are two main reasons why I am not willing to upload my address book.

The first, I alluded to in my earlier post. My address book contains contact information on people who I have some form of relationship with. I consider the information that I have about others to be private, and I won’t share it with outside services. I simply don’t think it is ethical or moral to do so.

The second is that my address book has information in it about my doctor, and other personal information. I don’t want a faceless corporation mining that data for some sort of nefarious purpose. Yes, I know I’m being a little hyperbolic, but that’s simply how I feel.

What surprised me, as a software engineer always thinking about ways to make the software I write easier to use, is that it isn’t possible to disable this “feature”. As you’ll note in the above conversation, Kat says that “There’s no way to permanently opt-out of this feature”.

Upon reflection it isn’t that surprising. After all, the data about me is what is most important to social media services, not necessarily the experience I have as a user of their service. Especially if I’m not paying for their product, in which case I am the product that they’re selling.

I’ve succumbed to the lure of LinkedIn

“Linkedin Chocolates” by Nan Palmero

Late last week I succumbed to the lure of LinkedIn and created an account. You can see my profile here.

In the past, I have been adamant that I wouldn’t have a LinkedIn account, and that I’d limit my “social networking” to Twitter. My Twitter profile is available here. So what changed?

Firstly, I don’t think any social network is inherently bad, nor inherently good. They’re tools to be used, just like the Internet is. The benefits of using them need to be weighed against the costs. While having a LinkedIn account is free, that only covers the monetary cost.

The main cost to me is privacy. By uploading my employment history into a third-party service, over which I have no control, I’ve lost a bit of my privacy. Not only does LinkedIn share information with people in my network, it also makes select portions of it available to the public. As such it’s a piece of my privacy that I’m never going to get back.

In the past, this cost was greater than any perceived benefit that I had. What has changed is that the benefit now outweighs the cost. It became apparent in the past week that I’ve reached a point in my career where a LinkedIn profile has become expected. If I want to continue to progress in my career, I need to have a LinkedIn profile.

That is to say, the lack of a profile was a greater cost to me than the cost to my privacy. So at this point creating a profile was a logical decision.

If I am asked whether someone should have a profile or not, my response has not changed. It depends on the individual’s career goals and objectives, and the costs that they’re willing to incur. I still don’t think that having a LinkedIn profile, or any other social media profile for that matter, should be “required”, and it is something that should be entered into with critical thought.

Now that I have a profile, if I’ve worked with you in the past, feel free to connect with me. No matter how many times LinkedIn prompts me, though, I am not willing to upload my email address book so that they can potentially find other people for me to connect with. My address book contains personal information about other people, and I respect their privacy; I won’t share their details without their consent.

Saving the Page of Failed Behat Tests in Moodle

“Agapostemon splendens, f, head, anne arundel county, md_2014-07-09-13.37.57 ZS PMax” by USGS Bee Inventory and Monitoring Lab

I will be the first to admit that I was skeptical when we introduced automated Behat tests for our Moodle codebase at NetSpot, now Blackboard. For those who may not be familiar with Behat, it is a tool that enables behaviour-driven development (BDD).

Conceptually, BDD is all about testing the behaviours of software. It emulates the actions that a user undertakes when using an application, and ensures that the results of those actions are as expected.

In our case, Behat does this by interacting with a special Moodle instance, making requests for web pages as if the Behat script were a real user. It then analyses the pages returned by Moodle to ensure they contain specified strings. More information about how Moodle is integrated with Behat is available on the Behat integration page in the Moodle Developer Documentation wiki.

One aspect of the integration that used to cause me significant frustration was that it was difficult to work out why a test had failed, because it wasn’t possible to see the contents of the page that had resulted in the error.

Or at least, that is what I thought.

When configuring the Behat integration I used options like this:
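A minimal sketch of that configuration in config.php, using the three standard Behat settings from the Moodle developer documentation; the URL and paths below are example values:

```php
// Excerpt from Moodle's config.php — the three standard Behat settings.
// The URL and paths are example values, not the ones from our servers.
$CFG->behat_wwwroot  = 'http://127.0.0.1/moodle';   // URL of the Behat test site
$CFG->behat_prefix   = 'behat_';                    // prefix for the test database tables
$CFG->behat_dataroot = '/var/www/moodledata_behat'; // data directory for the test site
```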

The other day I stumbled across a fourth option which has proven really useful. Now I configure the Behat integration like this:
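A minimal sketch of the updated configuration in config.php; the fourth line, behat_faildump_path, is the new one, and the URL and paths are example values:

```php
// Excerpt from Moodle's config.php — standard Behat settings plus the
// option that dumps failed pages. URL and paths are example values.
$CFG->behat_wwwroot       = 'http://127.0.0.1/moodle';
$CFG->behat_prefix        = 'behat_';
$CFG->behat_dataroot      = '/var/www/moodledata_behat';
$CFG->behat_faildump_path = '/var/www/behat_faildumps'; // must be writable by the web server
```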

This fourth line configures some code that ships with the Moodle Behat integration and is triggered when a Behat test fails. It saves the content of the HTML page that was returned when the test failed, which is a vital piece of information when working out why a test failed.

I am now an advocate for BDD and have used the Behat integration extensively for a new activity that I’m developing for one of our clients. It’s funny how one small configuration option can have such a huge impact on the way in which you use, and think about, a tool.