It’s been around a month now since the classwork portion of our Master of Publishing degree wrapped up, and now that I’ve had some time away from the intensiveness that was the last few weeks of school it seems like a good time to talk about the Media/Tech Project.
In the fall semester we devoted six weeks of our lives to starting fictitious publishing companies complete with a detailed list of books. But what to do in the second semester of a publishing degree?
In the spring, the program moved away from books to focus on media and technology (in the past, the program focused more heavily on magazines). As the publishing industry changes, it has become clear that in order for publishers to remain relevant, they must understand how technology impacts all aspects of their business. It’s not enough to focus on print and traditional forms of publishing. We have to look ahead to what publishing could become. And so, our class became Media/Tech Project guinea pigs.
While we started off the semester working on the Media project and finished with the Tech project, for all intents and purposes they were the same thing—the second was simply an extension of the first, which meant the project ran the entire course of the semester.
On the second day of class after the holiday break, we were divided into our groups and told to form media companies based on directions we pulled out of a hat. One group was assigned B2B (they pivoted and became NFP2NFP instead), another group got arts and crafts, and the final group pulled politics. From there, the groups were tasked with building a media entity from the ground up.
How do you build a brand? How do you become financially viable? How do you grow sustainably? What gap in the market are you meeting? What will your product be?
In our groups, we began to answer these questions and sketch out our business plans. Nearly every week, groups met with instructors to pitch their updated businesses, which evolved as we completed more research and received more feedback. At the beginning of the project, it was stressed that our start-ups would need to be agile, and that became our mantra as the semester progressed and the work piled up.
And every week, we were given additional pieces to complete. Brand guidelines. Marketing and advertising plans. Financials. Websites. Podcasts. The list went on.
Halfway through the project we were divided into additional groups with specific skills (this is where the Tech project came in). The Web Development, Analytics, Media Production, and Ebook teams provided focused support to their media entities following a series of mini lectures aimed at providing them with hands-on skills. Of course, all students were invited to attend the other teams’ lessons.
And just like the fall book project, we made it through to the end of the semester, presenting our launch-ready companies to panels of industry guests. Some of the most rewarding feedback we received was that our final companies were even pitch-worthy to potential buyers. And some of the best presentations I’ve ever seen were on that final day as well: one group even “recorded” the beginning of a podcast as part of their presentation.
While the Media/Tech project will undoubtedly look very different by next spring as our field continues to evolve and the skills in demand change, what I hope future classes also take away from the project is the importance of being flexible and the ability to find creative solutions.
“Generative art” is a blanket term for any creative work produced in part through programmatic or algorithmic means. “Playful generative art” makes use of highly technical disciplines—computer programming, statistics, graphic design, and artificial intelligence—to produce chat bots, digital poetry, visual art, and even computer-generated “novels.” These pieces may be motivated by serious social or political issues, but the expressions are decidedly unserious, often short-lived or quickly composed. Creators working in this medium are rarely artists first—as programmers, designers, game developers, and linguists, they use the tools of their trade in unexpected and delightful ways. Generative art also has much to teach us about issues at the intersection of ethics and technology: what is the role of the artist in a human/machine collaboration; what is our responsibility when we design programs that talk with real people; how do we curate and study ephemeral digital works? Digital artists, writers, technologists, and anyone interested in media studies are invited to attend.
Design a usable website. This is undoubtedly a lofty goal, but one that is increasingly crucial to business success in publishing. Web usability is really a form of mind reading: first ask what users want and need to do on a website, then design content that presents information in a way that guides users toward the appropriate end goals. The International Organization for Standardization (1998) defines usability under the following metrics:
Efficiency: the level of resources consumed in performing tasks
Effectiveness: the ability of users to complete tasks using the technology and the quality of output of those tasks
Satisfaction: users’ subjective satisfaction with using the technology
These focus areas provide a simple place to start when evaluating any website. If a company’s business goal is to have visitors sign up for an email newsletter, the web design must address the process a user undertakes to do this.
1. Is it efficient: does it exclude unnecessary steps like entering a phone number or other irrelevant information?
2. Is it effective: once they complete the online form, have they actually been signed up for a newsletter they want to receive?
3. Are they satisfied: does the user feel like they accomplished a task?
Asking these questions about efficiency, effectiveness, and satisfaction of experiences is the first approach to usability. Let’s take a look at each of these factors in more detail.
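The three questions above can even be turned into a rough scorecard. Here is a minimal Python sketch of that idea: effectiveness as task completion rate, efficiency as average time on task, and satisfaction as an average survey rating. The function name, field names, and sample numbers are illustrative assumptions, not part of the ISO standard.

```python
# Toy usability scorecard for a single task (e.g., a newsletter sign-up).
# The three metrics mirror the ISO framing quoted above; everything else
# here (names, thresholds, sample data) is invented for illustration.

def usability_scorecard(attempts, completions, avg_seconds, satisfaction_ratings):
    """Summarize one task's effectiveness, efficiency, and satisfaction."""
    effectiveness = completions / attempts            # share of users who finished
    efficiency = avg_seconds                          # resources (time) consumed
    satisfaction = sum(satisfaction_ratings) / len(satisfaction_ratings)
    return {
        "effectiveness": round(effectiveness, 2),
        "efficiency_seconds": efficiency,
        "satisfaction_1_to_5": round(satisfaction, 1),
    }

report = usability_scorecard(
    attempts=50, completions=42, avg_seconds=35.0,
    satisfaction_ratings=[4, 5, 3, 4, 4],
)
print(report)
# {'effectiveness': 0.84, 'efficiency_seconds': 35.0, 'satisfaction_1_to_5': 4.0}
```

In practice these numbers would come from user testing sessions or analytics, but even a crude scorecard like this makes the three ISO metrics concrete enough to compare two designs of the same sign-up flow.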
Efficiency: Better Make it Quick
Soothsayers and divining rods were once used to understand the world and human behaviour, but thankfully, modern science provides more reliable solutions. Neuroscientists have come up with different ways of actually reading the human mind. The most common mind-reading device in the field of web usability research is eye-tracking, which involves a camera following the eye as it moves around a display (science 1, soothsaying 0). Sirjana Dahal (2011) measured first impressions of websites using one of these eye-tracking programs and reported the following conclusions. The first is that users spent less time on websites deemed “unfavorable” (Table 4.8). Perhaps this is not a shocking revelation, but it underlines an important principle of web design and usability. People know what they are looking for, and if a website does not offer it, they will go elsewhere (and quickly, in no more than the time it takes to hit the back button). The conclusion that users spend less time on “unfavorable” sites also reinforces the importance of connecting people with the content they are looking for. This idea will be further explored in the following section on effectiveness in web design.
Dahal also looked at the ways users prefer to view web content, broken down by design category. The following table outlines those conclusions.
The information in the above table is in keeping with basic principles of design that apply to print materials like magazines and newspapers. Where those print technologies have traditionally had barriers to access that require a relatively sophisticated knowledge of print production to make a viable product, the online world is a democratized, open-source environment that encourages access for all. This generalization is certainly debatable, but at the same time, how usable a website is can be directly correlated with how much attention is paid to these principles of design.
Another resource from Dahal (2011) is an assessment of how much time users fixated on different areas of a simple website during the visit. That information is summarized in the table below.
There are two things worth paying particular attention to in this information. The first is just how little time is spent on any one element of the webpage. 6.48 seconds is the most time a website can expect to hold the attention of an average visitor. 6.48 seconds. Given this minuscule window, it is crucial that websites are built with absolute efficiency in mind.
Effectiveness: Help Me Help You
Steve Krug offers a very succinct guide to best practices for web design in his 2006 book, Don’t Make Me Think: A Common Sense Approach to Web Usability (2nd Edition). The underlying argument Krug makes is that users will not do things on a website that take extra mental effort; users prefer obvious, mindless choices. A big part of what makes some choices more obvious than others is how they are labeled, and how the navigation of the site is laid out. Krug (2006) argues that the lack of physicality on the internet makes a webpage’s navigation system absolutely crucial to a user’s experience.
Website navigation should:
Help us find whatever it is we’re looking for
Tell us where we are
Give us something to hold on to
Tell us what is here
Tell us how to use the site
Give us confidence in the people who build the site
(adapted from Krug, 2006, p. 59-60)
With so many important tasks placed on the shoulders of navigation, a great amount of attention should be paid to how the elements of navigation (menus, sections, and utilities as a start) are designed and communicated. The application of conventions that communicate physical space and direct user actions is a major factor in how effective a website is from a usability standpoint.
Krug’s model also suggests that users scan websites instead of reading them. He compares them to the billboards we pass on the highway at 100 km per hour. If the information on the site can’t be read at that speed, it is not an effective communication tool. One way to achieve quick and effective readability is to reduce the number of words on the page to focus user attention on exactly what you want them to do.
Krug describes how users interact with instructions on webpages: “The main thing you need to know about instructions is that no one is going to read them—at least not until after repeated attempts at ‘muddling through’ have failed. And even then, if the instructions are wordy, the odds of users finding the information they need is pretty low” (Krug, 2006, 42). Anyone who has tried to sift through an online help or FAQ page (here is an example of a wordy instruction page from the SFU Library) knows that this is absolutely true. It is a lightning-fast scan of the material, a quick attempt to click around and see if you can intuit your way out of your particular issue, and then a jump back to the help page for another nugget of information to try. Krug’s emphasis on the speed in which users can access the information they need mirrors the findings of Dahal, and many other usability experts and researchers. Milliseconds will dictate whether or not a person is going to use a website to do a task.
Satisfaction: Ahh. That’s the Stuff
The subtitle of Seth Godin’s 1999 book Permission Marketing: Turning Strangers Into Friends And Friends Into Customers has become almost a cliché in the internet marketing canon. The principles laid out in Godin’s book still hold, and point to a fundamental shift in marketing that came about because the internet changed how we talk to each other. Godin argues that in order to make a sale online, a company must ask permission using accepted web practices. If a business is serious about making an impact on its bottom line through a website (an impact that is not restricted to sales of goods, and should be thought of as any way an online presence can enhance the customer experience), serious attention to design and web usability is a good place to start. Providing a satisfying customer experience is about more than just giving customers the product they want. Now, more than ever, it is about getting people involved in a community (one that lives primarily online), asking them to participate in that community, and having them help build a brand reputation on behalf of the business. This ability to engage in and with a community should absolutely be considered when designing a usable website.
Research into the factors that contribute to user satisfaction on websites helps point the way toward what a business should do to keep their customers. Kincl and Strach (2012) studied user satisfaction on 44 different educational institutions’ information-based websites by documenting satisfaction levels before and after the use of the websites. The researchers found that content and navigation were key areas in determining overall satisfaction, and that “users perceive high-quality websites if they achieve what they visited the site for. This success in user activities is subconsciously reflected in website assessment” (Kincl and Strach, 2012, p. 654). In short, people are satisfied when the website they visit does what they expect it to do. A simple sentiment that is anything but simple to implement. Another interesting finding from this study is the fact that users care less about what the researchers term “trivial” data, like the colour of the site, than “non-trivial” data, like the content (Kincl and Strach, 2012). That is to say, an average user would still rate their satisfaction with an unpleasantly coloured site highly if they found the information they needed. This serves as a reminder that while attention to the look of a website is certainly important, in the end, users want substantive content (and to be able to find it).
The 3 Most Important Things to Remember about Usability
If you were scanning this article like a billboard, this is where your eyes should stop scanning and start reading.
1. Your website needs to communicate really, really quickly. In under 7 seconds.
2. Your website needs to be easy to use. It should be obvious where a user should focus, and then what action they should take at each step (and there shouldn’t be many steps).
3. Your website needs to give a user exactly what they think they need. A website is a promise, and it is up to you to define that promise and then to deliver on it.
Dahal, Sirjana. 2011. “Eyes Don’t Lie: Understanding Users’ First Impressions on Website Design Using Eye Tracking.” Master of Science thesis, Missouri University of Science and Technology.
Garrett, Sandra K., Diana B. Horn, and Barrett S. Caldwell. 2004. “Modeling User Satisfaction, Frustration, and User Goal Website Compatibility.” Human Factors and Ergonomics Society Annual Meeting Proceedings 48 (13): 1508.
Godin, Seth. 1999. Permission Marketing: Turning Strangers into Friends, and Friends into Customers. New York: Simon & Schuster.
Green, D.T. and J.M. Pearson. 2011. “Integrating Website Usability with the Electronic Commerce Acceptance Model.” Behaviour & Information Technology 30 (2): 181-199. doi:10.1080/01449291003793785.
International Organization for Standardization (ISO). 1998. Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs), Part 11: Guidance on Usability. Geneva, Switzerland.
Kincl, Tomas and Pavel Strach. 2012. “Measuring Website Quality: Asymmetric Effect of User Satisfaction.” Behaviour & Information Technology 31 (7): 647-657. doi:10.1080/0144929X.2010.526150.
Krug, Steve. 2006. Don’t Make Me Think: A Common Sense Approach to Web Usability. Berkeley, Calif.: New Riders.
Morris, Terry A. 2012. Basics of Web Design: HTML, XHTML & CSS3. Boston: Addison-Wesley.
Snider, Jean and Florence Martin. 2012. “Evaluating Web Usability.” Performance Improvement 51 (3): 30-40. doi:10.1002/pfi.21252.
Wu, Somaly Kim and Donna Lanclos. 2011. “Re-Imagining the Users’ Experience.” Reference Services Review 39 (3): 369-389. doi:10.1108/00907321111161386.
Other things to consider and discuss:
There are many institutions that attempt to categorize the “Top Websites” in the world at any given time, and Alexa is one of them. In addition to statistics on the most visited pages, Alexa provides information on category-specific website usage. For 2012, the top websites under the category “Publishing” were:
This top ten list shows varying degrees of attention to design and usability. The standouts are the two Wiley websites, which both have a clean look and a clear path for users to follow, and the Audible website, which works well to present a product, give key information about it, and direct the user to an action (to “Get Started” using the product). Audible is a subsidiary of another company that is very good at directing user flows in a publishing environment (Amazon, of course).
Keeping in mind that we are publishers, and not math people, what is an algorithm?
At first glance, the algorithm sounds like a concept out of a particularly frightening chapter of a calculus textbook, but there is no reason to fear the concept. In Kevin Slavin’s TED Talk, “How Algorithms Shape Our World”, he defines algorithms as “basically, the math that computers use to decide stuff”. This simple definition is an easy way to think about the algorithm, but what “stuff” are computers using to make the decisions?
Wikipedia summarizes algorithms as the following:
An algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, will proceed through a finite number of well-defined successive states, eventually producing “output” and terminating at a final ending state.
To make this a little easier to understand, think of an algorithm as a program that is capable of going through a huge pile of information and making sense of it. The logical output that comes from this process is defined by the user at the beginning, according to the things they need it to do, and the order and way in which it is asked to do those things.
Jeff Hunter (2011) provides this helpful list of what commonly used algorithms do:
Searching for a particular data item (or record).
Sorting the data. There are many ways to sort data. (Simple sorting, Advanced sorting)
Iterating through all the items in a data structure. (Visiting each item in turn so as to display it or perform some other action on these items)
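The three jobs in Hunter’s list can be sketched in a few lines of Python. This is a minimal illustration on a tiny, made-up “pile” of records, not code from Hunter’s article; real systems apply the same ideas at vastly larger scale.

```python
# Searching, sorting, and iterating over a small set of records.

records = [
    {"title": "Book C", "pages": 320},
    {"title": "Book A", "pages": 150},
    {"title": "Book B", "pages": 210},
]

# 1. Searching: find a particular data item (or record).
match = next(r for r in records if r["title"] == "Book A")

# 2. Sorting: order the data by a chosen key.
by_length = sorted(records, key=lambda r: r["pages"])

# 3. Iterating: visit each item in turn to act on it.
for r in by_length:
    print(r["title"], r["pages"])
```

Run together, the loop prints the books from shortest to longest, which is the output of all three operations chained: search to confirm a record exists, sort to impose order, iterate to do something with each result.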
Once you understand the basic idea that an algorithm is a function for categorizing large amounts of information, it is easy to see where algorithms have value in the digital world. Computer information is no more or less than a big pile of data, and algorithms give us shortcuts to processing these enormous data sets.
A basic type of algorithm is a sorting algorithm, which can take a set of data (let’s say, 100 books of different sizes) and sort it in a specific way (according to weight, for example). The Computer Science Unplugged website for children gives a good example of how a couple of different algorithms could approach a weight-sorting task, and how they trade off time and space. There are many other types of algorithms in common daily use, such as the search algorithms that display Google results, and each in its own way displays results or classifications based on how the algorithm works with the data.
Ok, so besides basic sorting tasks, what can algorithms really do?
Searching, Prioritizing, and Providing Biased Content:
Very complex algorithms can accomplish an endlessly diverse range of tasks. The Google search function is one example we are all familiar with, and now that you know what a basic algorithm does, this makes sense. Google search sorts through an enormous amount of information, using various functions to come up with the “best” end product—the exact website you were looking for. Seomoz.org has an entire section devoted to understanding the Google search algorithm, including an interesting timeline that tracks all of the changes to that algorithm beginning in 2000 and cataloguing each year’s changes up to the present. While some of the language on that site is technically advanced (and intimidating), it is interesting to see the huge number of changes every year (Seomoz estimates that Google changes its algorithm 500-600 times a year), and to guess at what those changes mean for how we receive content.
Another implication of algorithm technology related to Google searching is found on Facebook. There, algorithms determine what content appears on a News Feed based on what content the user has previously “liked”. Eli Pariser, president of MoveOn.org, talks about the danger of this kind of filtering in another TED Talk that is also embedded below. Pariser argues that people have to be careful about letting algorithms decide what news they see based on their likes, because a healthy news diet consists of both the things they instinctively like (chosen with the gut) and the things that could enrich an understanding of the world by pushing people to discover things outside their current sphere of knowledge. Pariser goes further to say that algorithms are taking the place of traditional news editors (who were human, of course). Where the human editor acts as a gatekeeper and guide to information based on what they know about the audience and what they think the audience needs to know, the (current, as of his talk) algorithm is only making judgments based on the most superficial, instinctual, and hedonistic of our online habits.
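Pariser’s filter-bubble worry is easy to demonstrate with a deliberately crude sketch. Nothing below resembles Facebook’s actual News Feed code (which is proprietary); the story titles, topics, and scoring rule are all invented. It simply ranks stories by how many of their topics the user has already liked, and drops everything else.

```python
# A toy like-based feed filter: score each story by topic overlap with
# the user's likes, show only overlapping stories, best match first.

liked_topics = {"sports", "celebrities"}

stories = [
    ("Local team wins final", {"sports"}),
    ("Election results analysis", {"politics"}),
    ("Star spotted downtown", {"celebrities", "sports"}),
    ("Climate report released", {"science", "politics"}),
]

def feed(stories, liked_topics):
    scored = [(len(topics & liked_topics), title) for title, topics in stories]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

print(feed(stories, liked_topics))
# ['Star spotted downtown', 'Local team wins final']
```

Note what never surfaces: the politics and science stories score zero and vanish entirely. That silent disappearance of the “vegetables” is precisely the editorial judgment Pariser argues we have handed over to the algorithm.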
Assessing physical space, stealing jobs from humans:
Algorithms are also used by robotic machines for measuring and assessing physical space. The Roomba vacuum cleaner is able to clean a room because of an algorithm that works out the dimensions of the room and then sends it to each part of the room, systematically. The 60 Minutes feature embedded at the end of this paper gives another example of this kind of algorithm. There, robots are programmed to pick up warehouse shelves and bring them to workers at the moment they need to access the materials to pack them. These two examples show that algorithms are capable of computing physical space, and then making an assessment of a complex set of data to make a certain “choice” about a desired outcome.
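The “send it to each part of the room, systematically” idea can be sketched as a coverage plan. This is only an illustration of systematic traversal, not iRobot’s actual navigation code: model the room as a grid of cells and sweep it back and forth so every cell is visited exactly once.

```python
# Toy coverage plan: visit every cell of a rows-by-cols grid in a
# back-and-forth (boustrophedon) sweep, like mowing a lawn.

def coverage_path(rows, cols):
    path = []
    for r in range(rows):
        # Sweep left-to-right on even rows, right-to-left on odd rows,
        # so the robot never re-crosses cells it has already cleaned.
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cells:
            path.append((r, c))
    return path

plan = coverage_path(2, 3)
print(plan)  # [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]
```

A real robot layers sensing, mapping, and obstacle avoidance on top of a plan like this, but the core is the same: a well-defined procedure that assesses a physical space and decides, step by step, where to go next.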
Algorithms in conflict: crashes, glitches. What happens when this stuff breaks down?
The most frightening thing about algorithms concerns their volatility, and their ability to “speak” to one another and inform the decisions that another algorithm makes. In “How Algorithms Shape Our World”, Kevin Slavin talks about the potential harm that can be done when these algorithms work outside of human control, referencing the Crash of 2:45, or the “Flash Crash” that happened on the U.S. stock market on May 6, 2010. The “black box trading” that algorithms execute, in conjunction with high-frequency trading, contributed to the second-largest point swing, and the largest point decline, on the Dow Jones Industrial Average in history (Lauricella and McKay, 2010). On how and why this happened, Slavin (2011) explains that with these algorithms, “We’re writing things that we can no longer read. We’ve rendered something illegible”. The term “black box trading” highlights the fact that some of this code now works behind a wall that even the people who wrote the original formula can no longer see through.
When it comes to algorithms that build on each other and change outside of human oversight, the cause for concern grows. Many art forms grapple with this potential problem: a lot of science fiction literature and pop culture deals with our fear of creating things that outpace humanity and then turn on it. Think of Isaac Asimov’s I, Robot, Cory Doctorow’s Down and Out in the Magic Kingdom, The Matrix, and Battlestar Galactica. The fact that these messages pervade popular culture (and have done so for hundreds of years) speaks to an understanding that humans might just be too smart for their own good. And while algorithms represent a truly fascinating and powerful tool, extreme caution is wise when implementing automatic systems, particularly when those systems control finances, social lives, access to information, or any other high-stakes domain.
A basic understanding of what an algorithm does, of what it has the potential to do, and of who is controlling the technologies that rely on these equations to make decisions about our lives gives us the power to ask critical questions and make sure that humans are in control of the technologies created.
At least, that is the hope.
There are a number of fun web videos on the topic of Algorithms. Please enjoy those below, and feel free to share others you know about in the comments section here.
The TED Talk that started it all: How Algorithms Shape our World by Kevin Slavin.
60 Minutes Feature: “Are Robots Hurting Job Growth?”. Alarmist title aside, this is a video that shows some very cool uses of algorithms in the manufacturing and production sectors in the United States.
Eli Pariser’s TED Talk: Beware Online “Filter Bubbles”
Another TED Talk, this time about the algorithmic editing of the web and how that editing function affects the content we see online, and thus the reality of the internet we experience. Because these algorithms are now our editors, Pariser argues that we need to make them serve a balanced news diet, including some junk food and some vegetables.
At PopTech 2012 Jer Thorp gave a presentation on Big Data. This is a visually gorgeous look at different types of data being displayed in very interesting ways.
Thorp looks at how the data trails we leave (think of yourself as a data slug that leaves behind a trace of everything you do in electronic form) can be examined, visualized, and ultimately understood. He also breaks down the “architecture of discussion” by mapping Twitter conversations that happen around a New York Times article.
He also warns that data is the new oil, and that the fragmented microorganisms that compose oil are not dissimilar to the fragmented pieces of our souls that make up public data.
And finally, from The Onion: Are We Giving The Robots That Run Our Society Too Much Power? This is just one of my favourite robot-related videos of all time. My apologies, it doesn’t have embedding code, but it is worth clicking the link.
Green, Scott A., Mark Billinghurst, XiaoQi Chen, and G. J. Chase. 2008. “Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design.”
Hunter, Jeff. 2011. “Introduction to Data Structures and Algorithms.” December 28, 2011. http://www.idevelopment.info/data/Programming/data_structures/overview/Data_Structures_Algorithms_Introduction.shtml. Retrieved January 13, 2013.
Lauricella, Tom, and Peter A. McKay. 2010. “Dow Takes a Harrowing 1,010.14-Point Trip.” Wall Street Journal Online, May 7, 2010. Retrieved January 15, 2013.
Pariser, Eli. 2011. The Filter Bubble: What the Internet Is Hiding from You. New York: Penguin Press.
Meteoric change occurred in publishing over the last decade as e-books became both more widespread and more interactive. Many publishers delight in the ever-evolving abilities of e-books, adding more technological bells and whistles to further distinguish e-books from print books in hopes of increasing their value from a consumer standpoint. While many adults embrace the convenience and adaptability of e-readers and tablets for their reading needs, the ubiquity of screens has given pause to many educators, who are now faced with difficult decisions about how best to implement screen-based technologies in their classrooms. To many teachers who see busy and exasperated parents frequently passing off their iPhones to their children in order to entertain them, more screen time seems to be the last thing their students need. This excess of screen time raises the question: do e-books belong in early elementary school classrooms?
While some recent studies illustrate e-books’ success over print books in their ability to attract young readers and increase their initial interest in reading, other studies reveal that e-books result in poorer comprehension, more easily distracted students, and passive reading experiences for emerging readers. Yet other studies demonstrate e-books’ ability to increase students’ early reading skills at a faster rate than traditional print books. With such conflicting data, it’s no wonder many schools are hesitant to invest in e-books. This report sifts through these contradictory studies to pinpoint ways in which teachers can use the right e-books to the benefit of their students, and how publishers can use these findings to create better e-book content for children in early elementary school environments.
In order to clear the air around e-books in early literacy, it is imperative to make clear distinctions between the vast varieties of e-books currently on the market. In Lisa Guernsey’s 2011 School Library Journal article “Are Ebooks Any Good?”, Jeremy Bruek, a leading researcher in children’s digital reading who is developing a rating scale for the educational value of e-books, argues that the name “e-book” is “too broad,” giving little indication of the vast difference between commercially developed enhanced e-books, unenhanced e-books, and enhanced e-books developed for educational purposes. So far, in his studies of one hundred children’s e-books, Bruek has found only a few suitable for educational purposes. Later in the article, Ben Bederson, co-director of the International Children’s Digital Library, gives a prime example of the multitude of unsuitable e-books when he discusses his experience downloading a Toy Story e-book for his five-year-old daughter: “It was 25 percent book and 75 percent movie.”
These types of enhanced (or in this case, over-enhanced) e-books are the focus of the Joan Ganz Cooney Center’s QuickReport, which found that enhanced e-books were “less effective than the print and basic e-book in supporting the benefits of co-reading because it prompted more non-content related interactions.” (In this study “co-reading” indicates guided reading with an adult or an adult reading to a child. “Non-content related interactions” include displays of interest in the device, rather than the story). The study also found that children reading enhanced e-books “recalled significantly fewer narrative details than children who read the print version of the same story.” While this evidence is fairly damning, the study did find that both enhanced e-books and basic e-books were more enticing to emerging readers than their print counterparts.
The QuickReport demonstrates that while many enhanced e-books should be avoided in literacy-building activities, basic e-books were on par with print books for comprehension and content retention, yet they share enhanced e-books’ ability to excite emerging readers with a new, fresh reading experience; therefore, using basic e-books in teacher-led reading activities has the potential to marry the best that print and digital have to offer emerging readers.
With the difference between e-books and enhanced e-books clearly illustrated, one more distinction begs to be made: the difference between commercially developed enhanced e-books and educationally developed enhanced e-books. Bruek worries that many companies running enhanced e-book subscriptions are “… putting money into something that isn’t sound from a pedagogical standpoint.” So what, if anything, makes an enhanced e-book suitable for emerging readers?
The answer to this question comes from a 2009 study by Ofra Korat, Adina Shamir, et al. entitled “Reading electronic and printed books with and without adult instruction: effects on emergent reading.” The researchers in this study examined the effects of enhanced e-book and print book reading on children’s emergent reading skills with and without adult instruction. In the study, 128 Israeli kindergarteners from low socio-economic status families were divided into four groups. The groups were assigned to read an e-book independently (EB), read an e-book with adult instruction (EBI), read a print book with instruction (PBI), or were given the traditional kindergarten curriculum as a control for the study. E-book groups read their e-books while working in pairs on desktop computers, rather than on e-readers. The researchers discovered that: “…the EBI group achieved greater progress in word reading and CAP (concepts about print) than all other groups. The EBI group also achieved greater progress in phonological awareness than the EB and the control groups.” These findings seem to completely contradict the Joan Ganz Cooney Center’s study; however, in their report, the researchers clearly define the type of enhanced e-book they used for the study:
“Emphasis was made on the size and font of the text (big and clear) and on the optimal amount of text which appears on each page. The text was highlighted congruently with the narrator’s reading (at the word level), in order to help children connect between the written and the spoken text and thus promote reading ability and CAP. Clicking on specific words enables listening to the sound of the words at the syllabic and sub-syllabic levels in order to promote the children’s phonological awareness.” (pg. 914)
The educationally developed enhanced e-book clearly attempts to mimic many of the cues and prompts that an adult would initiate in a co-reading environment. It prompts children to interact digitally with the text, but only to make connections or practice chunking words by their syllables in order to sound out full words. While these enhancements are a massive improvement over commercially developed enhanced e-books’ bells and whistles, the report indicates that educational enhanced e-books alone were not enough. Teacher instruction was the key to unlocking enhanced e-books’ potential to increase early literacy skills in emerging readers.
Publishers can take three things from these studies: 1) emerging readers are captivated and excited by digitally displayed books, 2) any enhanced content should be considered from a pedagogical standpoint, and 3) e-books should be designed with both e-reader and desktop computer use in mind.
Nearly all studies of emerging readers and e-books highlight the increased interest young readers have in e-books over print books. Unfortunately, many publishers are currently over-delivering interactive content and distracting young readers as a result. These same readers will still be enthusiastic about e-books with far fewer enhancements, and educators and parents will feel better about incorporating those e-books into co-reading activities. At the end of “Are Ebooks Any Good?” Julie Hume, a reading specialist in University City, Missouri, discusses her success with the online reading program TumbleBooks, a Toronto-based company that enhances commercial print books for educational e-book use. While TumbleBooks e-books do contain some music and animation, their main interactive feature is the option to have the story read aloud with corresponding highlighted text, or to read the story independently. To test out TumbleBooks, Hume split her students into two groups: one group received her original curriculum of co-reading in small groups with her guidance, and one group used the TumbleBooks program. After three months, the TumbleBooks group scored 23% higher than the group that received her regular instruction. Hume attributes their progress to the “strong model of fluency” that the TumbleBooks narrators provide; however, she also cautions that while these e-books are great for building students’ confidence, they shouldn’t replace print books, for fear that students will begin to rely on having books read to them rather than decoding the text on their own. Given this concern, it would make sense for publishers to develop enhanced e-books whose enhancements can be “locked,” reverting the content to basic e-book format. This would allow emerging readers who are excited by e-books to practice reading independently, without the temptation of having the text read to them.
It’s easy to say that publishers should consider e-books from a pedagogical standpoint, but in reality few publishers have first-hand experience in early childhood education. Luckily, in 2009, Kathleen Roskos, Sarah Widman, and the aforementioned Jeremy Bruek published an investigative report on analytic tools for assessing the quality of e-book design, which publishers could use as a guide for developing pedagogically sound enhanced e-books. “Investigating Analytic Tools for e-Book Design in Early Literacy Learning” examines three analytic tools and their ability to assess the effectiveness of various e-book designs, drawn from a sampling of books from multiple easily accessible online resources. While the purpose of the study was to observe which tool gave the researchers the best information about the quality of e-books, rather than to explicitly report what kinds of e-books are best for emerging readers, it does highlight the types of designs and calls to action the researchers were concerned with. Factors studied included book handling, navigation, multimedia, contiguity, redundancy, coherence, personalization, paths of attention (look-read-search-read vs. look-look-click-read-listen to and look-listen), and comprehension over print processing (i.e. understanding the text over reading independently). Publishers should consider these factors when producing e-books while they wait for a definitive tool for assessing the quality of enhanced educational e-books.
The last recommendation for publishers – to develop e-books for desktop computers rather than touchscreen devices – at first seems counter-intuitive. The reality is that very few schools can afford tablets and e-readers, but 97% of U.S. classrooms in 2009 had at least one computer. Scholastic’s Kids & Family Reading Report, Fourth Edition notes that while e-reading across a variety of devices is on the rise, in 2012 children reported reading e-books on laptops or desktops at roughly the same rate as on tablets or e-readers. Another way to look at the data: 41% of children polled are reading e-books on non-touchscreen devices. Publishers specializing in children’s e-books who want their product to be accessible to as many readers as possible should therefore build enhancements around simple point-and-click interactions rather than swipes, pinches, or orientation-triggered graphics that only work on a tablet. Conveniently, removing many of the enhancements designed for touchscreens also removes the very enhancements that cause distraction and decreases in comprehension and text awareness.
Educators and researchers are key partners in helping publishers develop enhanced e-books that will both delight emerging readers and improve their early literacy skills. Publishers should heed the recent studies that pinpoint the shortcomings of e-book enhancements, scaling back on superfluous additions to the text in favor of enhancements that support comprehension and retention and that encourage emerging readers to decode text and read independently.
Teachers should embrace basic e-books as a way to engage students in new literacy activities, as well as a way to teach them about developing good reading skills for use in a variety of text formats and circumstances. Educationally developed enhanced e-books should be viewed as an exciting new supplement to early literacy curriculums and should be used in conjunction with traditional print book activities to develop strong independent reading skills. With adult instruction and guidance, e-books can be introduced into classrooms to the benefit of early elementary school students.
ADDITIONAL REPORTS ON TECHNOLOGY IN THE CLASSROOM:
Multi-format Publishing: So Many Formats, So Little Time
This presentation is an overview of the different electronic publishing options for books, including a breakdown of which devices support which file formats, and the relative investment of time and money needed to create each of the three main file formats (.pdf, .epub, and .azw).