HTML5 and the Future of the Web

Essay for Monday 17 December 2012


Monday 17 December 2012 marks an important day in the history of the internet. The World Wide Web Consortium (W3C) has officially deemed HTML5 feature-complete. The language is now free to move towards final specification status, which will most likely occur over the next couple of years.

But in plain language, what does this mean? And what can HTML5 do that wasn’t possible before?

Technically, not very much. The reason why HTML5 is such a landmark is that it attempts to replace a myriad of disparate older technologies – some open, some proprietary – with a single set of universal open standards covering virtually every aspect of modern online publishing. Four of the most significant new features are support for more advanced document structures, native support for typefaces, support for dynamic media (video and audio), and support for a programmable drawing board named Canvas.

The Origin of HTML

Before we begin considering HTML5, it’ll be a good idea to understand what came before it. Here is a somewhat accurate history of the Hypertext Markup Language.

Tim Berners-Lee devised HTML between 1989 and 1990 as a method for sharing documents at the European research laboratory CERN. Based on the Standard Generalized Markup Language (SGML), HTML offered a relatively simple way to write text with body copy, hierarchical headings, and bulleted and numbered lists. Although these features were supported by many word processors of the time, HTML had one key difference: a form of cross-referencing called hyperlinking. This critical feature gave authors the ability to connect one thing to another.

Hyperlinking was HTML’s magical ingredient. Originally proposed by Vannevar Bush in his 1945 essay ‘As We May Think’, hyperlinks were put into practical application in the late 1960s by Doug Engelbart and his Augmentation Research Centre team at the Stanford Research Institute. Apple’s release of HyperCard by Bill Atkinson in 1987 made hyperlinked media available to the public on a wide scale. HTML was simply lucky to be in the right place at the right time – and it made the hyperlink an indispensable reality of everyday life as the internet took off.

In order to share HTML documents, one places them on an internet server that supports the Hypertext Transfer Protocol (HTTP). These documents can then be viewed using an application that can read the HTML file format. And that’s where WorldWideWeb comes in: it was Berners-Lee’s application – or web browser – for reading HTML documents. And if you happen to have a spare NeXTcube sitting about, you should still be able to run the software today.

Released to the academic community in 1991, HTML and the World Wide Web percolated for a few years until 1993–1995, when image-capable browser applications like NCSA Mosaic, Netscape and OmniWeb became available. The release of Microsoft Windows 95, which came bundled with Internet Explorer, was the point at which the wider public was introduced to the World Wide Web.

Versions of HTML

Like applications, file formats also go through version releases – HTML is no exception to this rule. The main reason for building a new version of a file format is to add features that extend its usefulness. In programming circles, file format version releases are normally called ‘specifications’.

HTML5 is the fourth formal specification of the file format – this is what preceded it:


HTML 1

The first release of HTML didn’t exist as an official specification, but rather as a building block made by Tim Berners-Lee upon which others built. It evolved haphazardly over five years, and included new features like initial support for inline images (GIF and JPEG files). Every web browser had a different interpretation of HTML, which led to wildly-varying experiences on the early World Wide Web.


HTML 2.0

An official specification called HTML 2.0 was published in late 1995. It formalized HTML as a standard file format, and established a core set of features. Amongst these was the ability to build forms into an HTML document, which allowed for data collection. This addition was one of the catalysts that helped the internet to move from an academic playground to a commercial enterprise, complete with online shopping.


HTML 3.2

Released in early 1997, HTML 3.2 brought support for tables, a function already found in many word processors. It’s at this point that the first glimmers of the modern World Wide Web started to appear, as HTML was joined by a number of complementary standards. 1997 also marked the acceleration of competition between browser manufacturers to outdo one another.

The most important complementary standard was called Cascading Style Sheets (CSS) – it gave authors the ability to control the appearance of rendered HTML in a web browser. Another standard was the open Portable Network Graphics (PNG) file format, which was created in response to Unisys’ patent claims on the popular GIF file format.

Support for frames (later removed from HTML5) was also added. This allowed a single page to be broken up into multiple sections, and was the conceptual origin of the headers, footers and sidebars seen on many websites today. Another new feature called the ‘div’ appeared – we’ll return to it shortly. Finally, wider support for JavaScript in contemporary web browsers gave programmers the ability to make HTML pages interactive.


HTML 4

Published in stages between late 1997 and late 1999, HTML 4 and its related standards (CSS2, JavaScript, &c.) formalized the state and behaviour of the World Wide Web as we know it today. HTML 4 took many of the custom features developed for specific web browsers and combined them into its official specification for everyone to use.

Perhaps one of the most important features was the ‘div’ element, an abbreviation of the word ‘division’. Although it was introduced in HTML 3.2, the div became useful when combined with other HTML 4 features, complementary standards like CSS and JavaScript, and better support by web browsers. In a nutshell, the div added support for independent objects and layers in an HTML file.

Previously, HTML files were like standard word processor documents: text started at the top of a window, and flowed to the end in a single continuous piece. A div combined with a little CSS coding allowed web developers to pull content out of this continuous text flow, and visually position it anywhere within the window as a separate object. Adding a bit of JavaScript allowed for basic animation and interactivity.
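The technique can be sketched in a few lines of code – the element name, coordinates and colours here are merely illustrative:

```html
<!-- A div pulled out of the normal text flow with CSS absolute
     positioning, plus a line of JavaScript to nudge it about. -->
<div id="box" style="position: absolute; top: 40px; left: 120px;
     width: 200px; background: #eee;">
  <p>This block sits wherever the coordinates place it.</p>
</div>
<script>
  // Clicking the div moves it 20 pixels to the right.
  document.getElementById('box').onclick = function () {
    this.style.left = (parseInt(this.style.left, 10) + 20) + 'px';
  };
</script>
```

Trivial as it looks now, this separation of an object from the text flow is what made browser-based layout and animation possible at all.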

This advance changed HTML greatly. Before HTML 4, CSS and JavaScript were woven together, the production of content for the World Wide Web lay mainly in the hands of dedicated programmers and developers. Combined with reasonable feature support in Internet Explorer 4 and Netscape 4, these new standards made it possible for creative professionals to start designing website layouts with more confidence and control.

A Lack of Consistency

Part of the success of any file format is whether it is widely supported. For example, Aldus’ desire to support the placement of graphics in their PageMaker page layout application led to the creation of the TIFF format in 1986 – a standard that is still one of the most popular image file types in professional design today.

Another example is Adobe’s enormously successful PDF file format. Based upon PostScript – a language that exists for the purpose of describing the appearance of things – PDF is now an ISO standard in use everywhere. Macintosh users will even find it baked into OS X: open anything in any application, choose ‘Print’ from the ‘File’ menu, and there is a ‘PDF’ button in the lower left-hand corner of the Print dialogue box.

The key to any file format is having applications that support it. By default, most people view PDFs using Adobe’s Reader software. However, there are many other applications that can view and create PDF files, such as Apple’s Preview and the X Window System’s Xpdf. In the case of HTML, it’s the responsibility of the web browser to render files for viewing on-screen.

During the mid 1990s, two web browsers emerged as popular choices: Netscape’s Navigator (also called Communicator), and Microsoft’s Internet Explorer. A game of one-upmanship followed, which is now known as the Browser Wars. Microsoft, Netscape and several other parties started to extend HTML in various ways, in an attempt to gain a competitive advantage over each other.

One of the earliest extensions to HTML was ‘img’, which gave users the ability to display images directly in the flow of a document. This feature was added to NCSA Mosaic in 1993 by Marc Andreessen, who went on to found Netscape. Before img, HTML lacked the ability to handle graphics elegantly; afterwards, the web was a much better place. However, it meant that anyone who wanted to view newly-amended HTML with inline graphics had to use a browser that understood what ‘img’ meant.

And this is the paradox of HTML’s success. Every time somebody wanted to add a new feature to the World Wide Web, they did so by extending HTML yet again and releasing a new version of their web browser – leaving everyone else to catch up or suffer the consequences. The result: every browser would render the same piece of HTML code differently, supporting some features and neglecting others.

A Decade of Stagnation

By 2000, HTML had reached some sense of stability. Microsoft’s dominance in operating systems had guaranteed Internet Explorer’s overwhelming share of the web browser market, whilst Netscape was experiencing troubles after a buyout by America Online. Even though Internet Explorer was the most-used browser, it and other web browsers still rendered HTML to screen in different ways.

This meant that the designers and programmers who built websites had to do so in a manner that took this into account. For years, a common statement on many sites was ‘This website works best at a resolution of XXX by XXX using XXX-browser version X.0 and higher.’ Other websites were programmed in such a manner that visitors were routed to different versions of the same site, each specifically compatible with their browser.

The consequences of fragmented HTML support meant different things for different people. Designers and developers were frustrated by the excessive time and effort wasted getting a single site to work on different browsers. The companies that owned the sites were upset by how much they cost to maintain. And some visitors would be driven away because they were using an unsupported browser.

After Microsoft released Internet Explorer 6 in 2001, five years elapsed before version 7 was completed. Since the browser had no real competition at the time, there was no urgency to develop it much further. Its support for HTML 4.01 and partial support of Cascading Style Sheets (CSS) set the benchmark against which many websites were produced, to the general dissatisfaction of the designers and developers building them.

An Interlude with XML

HTML has always enjoyed a somewhat incestuous relationship with its parent language, SGML. During the late 1990s, an effort was made to produce a simplified version of SGML, which became known as the eXtensible Markup Language (XML).

The advantage of XML over HTML was that XML allowed users to define their own DTD (Document Type Definition) – a file that spells out the structure and order of data in its associated XML files. HTML lacked this capability, since its DTD structure had already been predefined – and could only be changed by the release of a new specification. If you’re familiar with the basics of HTML coding, this is the reason why ‘head’ always has to appear in the code before ‘body’ (and never the other way around).
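A minimal document shows that fixed ordering in practice (the doctype shown is the modern HTML5 one):

```html
<!-- The predefined document structure dictates the order of the
     top-level elements: 'head' must always precede 'body'. -->
<!DOCTYPE html>
<html>
  <head>
    <title>A minimal document</title>
  </head>
  <body>
    <p>Swapping head and body would make this file invalid.</p>
  </body>
</html>
```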

Like HTML, the XML specification was defined by the World Wide Web Consortium. In an attempt to standardize the wildly-varying HTML implementations, the W3C issued a new standard known as XHTML. Similar to HTML 4.01 in functionality, XHTML imposed more rigorous standards on how code was written and structured.

The intent of XHTML was to produce webpages that would render correctly in existing HTML browsers, but also be compatible with newer XML browsers. Since XML allowed for greater flexibility, it meant that XHTML would be the means to prepare the existing World Wide Web for the future. In theory, the process of future standards development would follow this path:

[1] The XHTML specification is released;

[2] Over the course of time, developers make the minor changes to update their websites from HTML to XHTML;

[3] New versions of XML web browsers are released to replace the existing HTML-only web browsers;

[4] At some stage, people access an XHTML-only web using XML browsers – not knowing that the change had ever happened;

[5] Future advances could then occur at leisure, thanks to XML technology and standards.

And everyone lives – at least in theory – happily ever after.

Reality Gets in the Way

Theory can be simple, elegant and beautiful. Reality is often an ugly, complex mess. At the time, Internet Explorer 6 had no real competition to speak of. So Microsoft did almost nothing to support XHTML and parts of the existing HTML and CSS specifications. And although many developers upgraded their HTML code to XHTML, there wasn’t any practical reason to do so under the circumstances.

In the early 2000s, the W3C set up a working group to create XHTML 2. The new specification was designed to be a complete replacement for XHTML 1 and HTML 4 – meaning that there would be no backwards compatibility with the older standards.

The idea of a new universal standard with no support for the existing specifications was controversial. Some people who worked on web standards saw this move as the last straw, and abandoned their support for future releases of XHTML. A variety of interested parties including Ian Hickson formed the Web Hypertext Application Technology Working Group (WHATWG) in 2004, intent on coming up with an alternative standard.

PNG & CSS to the Rescue

In the meantime, several other web browsers appeared on the scene. The remnants of Netscape were formed into the Mozilla Foundation; the group released Firefox as a spiritual successor to Navigator. Apple adapted KDE’s HTML rendering engine into WebKit, which became the basis of Safari in 2003 and Google’s Chrome in 2008. Opera continued producing their own browser, which has retained a reputation as one of the most faithful interpretations of the HTML standards. And Omni continued work on its NeXTstep-based OmniWeb browser by releasing it for Mac OS X.

Internet Explorer accounted for 90–95 % of all web browser traffic at its height around 2003 and 2004. However, its most current version at the time (6.0) had some significant flaws. Firstly, its reputation for security was poor. Secondly, it only partially supported Cascading Style Sheets – making it difficult for designers to build websites that looked the way they wanted. Thirdly, its support for the popular PNG graphic format was lacklustre at best.

At first, web developers worked within the bounds of Internet Explorer 6’s capabilities. Then some developers began relying on workarounds and hacks to get the application to behave more in line with HTML and CSS standards. Finally, developers started turning to the other web browser manufacturers who were releasing applications that were more standards-compliant than Internet Explorer. Mac OS X users started using Safari; Linux, Unix and Windows users turned primarily to Firefox.

Microsoft’s release of new versions of Internet Explorer generally coincided with new releases of their operating system. Version 7 appeared alongside Windows Vista in 2006; version 8 was released in 2009 a few months before Windows 7. These two releases did little to resolve support for HTML standards. Internet Explorer 9 (2011) was the first concerted effort by Microsoft to bring their web browser into closer compliance, and version 10’s recent release with Windows 8 makes further advances.

However, this did not stop a slow but steady exodus to other web browsers over the past decade. It is estimated that less than one third of people online use Internet Explorer today. And in a weird twist of fate, Microsoft now has an active campaign to kill off use of Internet Explorer 6 worldwide – a laudable step in the company’s dedication to future HTML standards compliance.

Personal Experience with Internet Explorer 6

I delight in new forms of technology. And the best way of learning anything new is through trial and error. With the emergence of HTML 4 and CSS2, I decided that it was time to improve my then-antiquated HTML coding skills by building a new version of my professional website.

As with any project, I decided to set a few limitations. The most important one was to only use open standards like HTML 4, CSS2, PNG and JavaScript. At the time, a company called Forgent Networks claimed that they owned patents that covered JPEG – the most popular graphic file format on the internet. The controversy persuaded me to take a look at the open PNG graphic format instead, which had some interesting features such as support for semi-transparency.

2003 saw the release of Apple’s Safari, and Mozilla (the predecessor to Firefox) was sufficiently stable at the time to allow for experimentation. As always, OmniWeb and Opera were available – and all four web browsers had good support for HTML 4, CSS2, PNG and JavaScript. The combination of these technologies and browsers would be the basis for my experiment in online layout and animation.

Some years before, I had taught myself to use After Effects – Adobe’s dynamic composition application, which I still like to call ‘calculus in motion’. After Effects’ online analogue was a little application called LiveMotion. Introduced by Adobe as a form of competition to Macromedia’s Flash, LiveMotion lasted only a few years before being discontinued. Nevertheless, my experiments with animation in After Effects and LiveMotion set the basis for my website experiment.

Adobe’s GoLive web development software included support for a form of animation known as Dynamic HTML. The first support for Dynamic HTML appeared with the release of Microsoft Internet Explorer 4 in late 1997, and was reasonably well implemented in other browsers by 2000. My idea was to compose a site layout that consisted of independent objects that moved in time and space by responding to interaction from users. As objects moved around, semitransparent graphics would overlap – producing interesting interactive effects.

Since the open PNG format supported semitransparency and GoLive’s animation timeline was relatively easy to program, everything went well. And most importantly, the HTML code rendered accurately within Mozilla, OmniWeb, Opera and Safari. When I showed the result to a few colleagues, they were convinced that I had programmed the entire website using Macromedia Flash.

And then I tested the code in Internet Explorer 5 and 6 on Windows. The Dynamic HTML animated correctly, but all of the graphics looked terrible. It was then that I discovered that Internet Explorer didn’t have stellar support for transparent PNG graphic files – particularly those with PNG-24 semitransparency. Since my website design depended upon semitransparent visual effects and animation, I was forced to abandon my original plans in favour of a tamer Dynamic HTML design. A modified version of this site remains online, including a splash page which is perhaps the last Flash animation on the internet built using LiveMotion.

PNG transparency support was just one of the things Internet Explorer 6 couldn’t do. Although PNG was officially released in 1996, it took until 2011 and the release of Internet Explorer 9 for Microsoft to fully support the file format. And because more than 90 % of users were accessing the World Wide Web in 2003 with the application, problems like the one that I experienced were commonplace amongst web developers. Some tried to produce workaround fixes. Others got tired of waiting and started concentrating on the future evolution of the HTML standards.

HTML5 Emerges

As mentioned earlier, a split had occurred in the World Wide Web Consortium – the organization responsible for defining HTML and its related standards. One group actively promoted a more XML-integrated approach with XHTML 2; dissenters formed another informal group called WHATWG in 2004.

The proponents of XHTML 2 encouraged the introduction of a comprehensive new specification that broke with previous HTML and XHTML standards. Support would be added for a variety of important features, including universal access and complex document structures. But it meant that all existing HTML and XHTML code and websites would have to be reprogrammed to be compatible.

To those opposed to XHTML 2, the proposed standard sounded like the following scene. Imagine the President of the United States, the Chief Justice of the Supreme Court, and leaders from the House of Representatives and Senate appearing together to make an announcement. The President steps forward to the podium and says:

‘We’ve decided to amend the Constitution. Congress has passed the law defining an official national language. I’ve signed the bill, the Supreme Court has declared it constitutional, and thirty-eight states have approved the amendment. That language is Esperanto, and everyone is required to start using it in a year’s time. We all think that it’s in the nation’s best interests. And before I forget – English and all other languages will be outlawed from that point on.’

The various supporters of WHATWG preferred a more subtle approach. Instead of building a new specification that was effectively incompatible with older standards, they advocated for a more evolutionary approach. That meant retaining nominal support for previous versions of HTML, cleaning up the language’s grammar and syntax, deprecating features that were no longer useful, and adding support for needed new features.

The two groups worked on their respective specifications for a number of years. By 2007, WHATWG’s backwards-compatible ‘Web Applications 1.0’ standard had gained favour, and members of the W3C decided to use it as the basis for future specifications. Later that year, it emerged under a new name: HTML5.

The New Features

HTML5 isn’t an isolated specification. It’s just a convenient name for a set of complementary standards that includes HTML5 itself, CSS3, PNG, SVG, MathML, WOFF and others. There are simply too many new features to discuss in this essay, so I’ll mention a few significant ones.

Document Structure

The HTML5 specification itself contains a series of new elements that allow authors to define more complex document structures. In the past, one was limited to defining text as body copy (p), bulleted and numbered lists (ul & ol), hierarchical headings (h1–h6), plus a few other choices. HTML5 adds support for defining every type of content on a webpage, plus the relationships between them. New markers include headers, footers, articles, sections, navigation areas, sidebars, figures with associated captions, and many more.
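A skeleton of a typical HTML5 page illustrates a few of these new elements (the content is, of course, invented):

```html
<article>
  <header>
    <h1>Article title</h1>
    <nav><!-- links to related pages --></nav>
  </header>
  <section>
    <p>Body copy, marked up as before.</p>
    <figure>
      <img src="chart.png" alt="A chart">
      <figcaption>A caption now formally tied to its figure.</figcaption>
    </figure>
  </section>
  <aside><p>A sidebar.</p></aside>
  <footer><p>Publication details.</p></footer>
</article>
```

Where HTML 4 would have used a pile of anonymous divs, each part of the page now declares what it actually is.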

Video & Audio

Two critically important new HTML5 elements are ‘video’ and ‘audio’. From day one, HTML supported text; support for graphics followed shortly thereafter in the form of JPEG and GIF. These two static forms of content were acceptable for most web users in the first few years. When the World Wide Web became more popular and commercialized, it became apparent that support for the dynamic counterparts of text and graphics – audio and video – should follow. What followed instead was a complex mess.

By the time HTML 4 had become a formal specification in late 1997, the standard still had no way of defining or supporting video and audio. People had already started resorting to using third party and proprietary solutions to fill the void: QuickTime, RealAudio Player, Video for Windows, Shockwave, Flash, and others. As time dragged on into the 2000s, nothing was done to amend HTML to natively support dynamic media file formats. Ultimately, Macromedia’s Flash became the most popular solution, but only operated if you had the plugin installed in your web browser.

In the end, staff from Opera Software proposed the inclusion of the ‘video’ and ‘audio’ elements as an integral part of the evolving HTML5 standard. And the problem was resolved – at least in theory. The reality is somewhat different: there still is no universal agreement on which specific multimedia file formats will be supported. Opinion is currently split between the patent-protected MPEG formats and the open source Ogg and WebM formats, and no resolution is expected anytime soon.
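In practice, the format dispute is usually sidestepped by listing several sources and letting each browser pick the one it supports – the file names here are hypothetical:

```html
<!-- One source in the patent-protected MPEG family, one in the
     open WebM format; the paragraph inside the element is a
     fallback for browsers that understand neither. -->
<video controls width="640" height="360">
  <source src="film.mp4" type="video/mp4">
  <source src="film.webm" type="video/webm">
  <p>Your browser does not support the video element.</p>
</video>
```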

Canvas (& SVG)

Another important new HTML5 element is one innocently named ‘canvas’. Canvas gives developers the ability to add programmable drawing boards to their HTML code. Unlike a static graphic file, canvas elements can be scripted to build graphics on the fly. Simple and procedural in nature, canvas is meant to offer a degree of interactivity that would otherwise be achieved using Flash. For developers who require a more object-oriented approach to illustration, the XML-based Scalable Vector Graphics (SVG) file format is also supported natively by HTML5-compatible web browsers.
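A small sketch shows the difference in approach – the same two rectangles drawn procedurally on a canvas, and then declaratively as SVG:

```html
<canvas id="board" width="200" height="100"></canvas>
<script>
  // Procedural drawing: fetch the 2D context and paint on it.
  var ctx = document.getElementById('board').getContext('2d');
  ctx.fillStyle = 'navy';
  ctx.fillRect(10, 10, 120, 60);    // a filled rectangle
  ctx.strokeRect(140, 10, 50, 60);  // an outlined one
</script>

<!-- The object-oriented alternative: the same shapes as SVG markup. -->
<svg width="200" height="100">
  <rect x="10" y="10" width="120" height="60" fill="navy"/>
  <rect x="140" y="10" width="50" height="60" fill="none" stroke="black"/>
</svg>
```

The canvas forgets the shapes the moment they are painted; the SVG rectangles remain addressable objects that can be styled and scripted individually.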


Font Embedding

HTML5 also adds support for something that designers have requested for years: the ability to embed fonts in webpages. Ever since the development of Adobe PostScript thirty years ago, creative professionals have enjoyed using a wide variety of digital typefaces in their layouts and designs. This benefit, however, did not extend to the World Wide Web.

For years, HTML has allowed web developers to specify which fonts should be used when rendering code in a browser. Unfortunately, this didn’t really mean anything, since the only fonts a web browser could render were the ones installed on the user’s computer. And since every computer has a different set of fonts installed, the choice of universally-available fonts was limited to a small selection. Consequently, many websites were designed using this core set of fonts, which included Times Roman, Arial, Courier, Georgia, Verdana and Lucida.

This limited set of typefaces was enough to drive most designers crazy, and so many workarounds to the problem appeared. A favourite trick was to build text elements in Photoshop using a specific font, save them as bitmap graphics (GIF, PNG or JPEG), and then place the graphic into an HTML file. Others would use Flash files, which supported font embedding. Some designers avoided HTML altogether by posting PDF files of their print layouts online.

The problem with these font embedding workarounds is that they destroyed any semblance of the HTML file format as a container for structured content. When a website used a graphic to represent text set in a specific font (e.g. for the purposes of corporate identity), the bitmapped text lost its semantic meaning. And the associated HTML file’s function to act as a set of meaningful data was reduced.

Fortunately, HTML5 and CSS3 offer a working solution in the form of ‘@font-face’ and the Web Open Font Format (WOFF). The @font-face command was introduced with CSS2, but its practicality was hampered by lack of browser support. Interestingly, Internet Explorer 4 supported font embedding in 1997; this was a moot point, because other browsers didn’t support the feature. Many type foundries also balked at the idea of anyone being able to download fonts embedded in a website. The development of the WOFF font format in 2009 provided a more stable basis for the use of fonts online.
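In use, the mechanism is refreshingly simple – the font name and file path below are hypothetical:

```html
<style>
  /* Declare a downloadable WOFF face under a name of our choosing. */
  @font-face {
    font-family: 'Essay Serif';
    src: url('fonts/essay-serif.woff') format('woff');
  }
  body {
    /* Fall back to universally-available faces if the download fails. */
    font-family: 'Essay Serif', Georgia, serif;
  }
</style>
```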

Now that current versions of the most popular web browsers all support embedded WOFF fonts, designers can look towards a future of more diverse typography on the World Wide Web. Regrettably, there is still much to do in order to make HTML typography more controllable and precise – the lack of a universal typesetting engine across all HTML5-compliant browsers hampers designers’ ability to produce consistent layouts. But that’s a topic for another essay – font embedding is a significant advance to be celebrated.

Web Browser Support

There would be little point to having the new HTML5 standards if web browser support wasn’t readily available. Fortunately, this is not a problem.

During the development of HTML 4, it seemed like the specifications were initially trying to play catch-up with the web browsers. As different developers unilaterally added functions to their own browsers, the specification went through its own changes. After a stage, the browser authors decided that their software was working sufficiently well, and stopped adding support for newer specification features. As a result, HTML 4 was pretty well supported, but browser makers didn’t provide adequate support for Cascading Style Sheets and other standards. This left web developers and designers in the lurch for years.

This time around, it’s the browser makers who are trying to catch up to the HTML5 standards – there are two simple reasons for this. Firstly, no browser holds a majority share of the market, and this makes the effort to be HTML5-compliant into a competition. Secondly – and perhaps more importantly – there is a general sense of agreement that it is in the best long term interests of everyone that a universal set of technical standards exists.

There are many web browsers available on a variety of platforms today. Each browser manufacturer makes claims on the HTML5-compatibility of their product, but these claims should be taken lightly. No single browser is in full compliance; instead they have been adding support for HTML5 features over the past few years. Now that HTML5 has been declared effectively feature-complete, the browser manufacturers have an established target towards which to move.

A few web browsers are worth mentioning. Maxthon is an interesting product worth consideration, because it uses a hybrid rendering engine to handle different websites and situations. Maxthon’s developers also pride themselves on claiming to be the most HTML5-compliant browser available today, but other browser developers may beg to differ.

The two major competitors from the original browser wars of the late 1990s also have their own offerings. Microsoft continues its development of Internet Explorer. Version 9 was their official rededication to standards compliance; version 10 (which recently shipped with Windows 8) is a marked improvement. Netscape went through a series of name changes, and exists today in the form of Firefox. The Mozilla team’s efforts on Firefox and related technologies should be commended, given their support for esoteric and non-commercial platforms.

As always, Opera provides a compelling option for users. They have always advocated for open standards and close compliance, and their browser is an excellent example of this dedication.

Two web browsers share the WebKit rendering engine: Apple’s Safari and Google’s Chrome. Safari is the default in Mac OS X and iOS; Chrome enjoys the same status in the Android operating system. Given the enormous popularity of mobile devices over the past few years, Chrome is now the most-used web browser today. Both Safari and Chrome share a common base and are highly HTML5-compliant, but are functionally distinct from one another.

And OmniWeb – one of the oldest web browsers still in development – is still available for those who enjoy a bit of nostalgia. Although it’s no longer a major concentration for development at the Omni Group, it’s still more HTML5-compliant than Internet Explorer 9. And some of its useful application features are still unique in the web browser market.

There are other web browsers available. With so many to choose from, I recommend that you choose a browser that is reasonably HTML5-compliant, and has the functional features that best meet your personal needs.

The Future of HTML

HTML5 marks a significant point in the development of the internet. After an initial flurry of improvements in the language during the late 1990s, HTML remained relatively static for more than a decade. The reforms and advances introduced with HTML5 pave a way towards a future where specific computers and their operating systems are less significant, and where they act more as appliances offering a window into a wide range of online services.

Personal computing and the internet have come a long way since their emergence in the late 1960s and early 1970s. The past four decades have changed civilization and literacy with an impact comparable to that of the printing press during the Renaissance. But technology doesn’t exist in a vacuum – it is influenced by society, and in turn influences society.

The development path of computing can be likened to that of the automobile. Karl Benz started experimenting with motors during the 1870s, and built what is widely considered to be the first motor car in 1885. Before Benz started his own research, there were many other engineers experimenting with forms of self-propelled carriages. But the motor vehicle as we know it today is really the product of later minds like Henry Ford, who introduced the Model T in 1908. This mass-produced convenience was the first practical opportunity that many people had to invest in their own motorized transportation.

It was only once the automobile was popular that many of the societal implications started occurring. Some people found themselves out of a job. The existing road system had to be updated to carry the greater burden of personal vehicles. Laws had to be passed to standardize appropriate driving behaviours and to determine who was competent to hold a driver’s licence. And as the number of deaths from traffic accidents soared, new safety laws had to go into effect. These changes took decades, and were the result of much trial, error and circumstance.

Computing has a similar timeline. Digital computing emerged in the 1940s with the work of Alan Turing and other mathematicians. The first useful commercial computers appeared in the 1950s. Inspired by an article he read in 1945, Doug Engelbart gave what is called the ‘Mother of All Demos’ in 1968 – the first public demonstration of many modern computing concepts like the file system, the graphical user interface, screens, keyboards, mice, and collaboration with others over a network.

1969 saw the introduction of four critical elements of modern computing: the C programming language, the Unix operating system, the Arpanet network, and the Generalized Markup Language. Each of these is worth an essay in its own right – and will be dealt with some time in the future.

What most people recognize as the start of the computer revolution occurred in the mid 1970s, as personal computers became available for public purchase. By the 1980s, many American families had purchased a home computer, and most workers came to depend on one in their professional lives. The advent of HTML and the World Wide Web in the 1990s provided the means to connect these independent computers together in a way that has radically changed civilization.

If we use the automobile metaphor, what does HTML5 represent? Perhaps the best way to understand it is that different websites are akin to different vehicle manufacturers. Each one offers a slightly different product, and new improvements and features are added with each new model release. People using the internet are the drivers, who choose their conveyance and the roads they want to travel.

HTML5 is like the agreed set of conventions stating that all Americans will drive on the right-hand side of the road, and that a standard set of road signs and symbols will be used. There is still room for improvement, and that will come with subsequent releases of the HTML specification. However, HTML5 is a well-designed base from which to build.

Thus HTML5 becomes the universal platform – a set of rules and conventions that keeps the internet in reasonable working order. And all you’ll need to access the myriad of stuff out there are devices that can render HTML5-compliant code properly. Websites will act as front-end user interfaces for services and information, but be more powerful and capable than the sites that exist today. These sites won’t do everything, but will provide enough for most people. And that’s what counts.
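To give a concrete sense of how small this shared set of conventions is, here is a minimal HTML5 document. The simplified doctype and the new structural elements (`header`, `nav`, `article`, `footer`) are genuine HTML5 features; the page title, headings and links are placeholders for illustration only.

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>A Minimal HTML5 Page</title>
</head>
<body>
  <!-- New structural elements replace anonymous <div> containers -->
  <header>
    <h1>Site Title</h1>
    <nav>
      <a href="/">Home</a>
      <a href="/essays">Essays</a>
    </nav>
  </header>
  <article>
    <h2>Article Heading</h2>
    <p>Body copy goes here.</p>
  </article>
  <footer>
    <p>&copy; 2012</p>
  </footer>
</body>
</html>
```

Any HTML5-compliant device can render this page identically – which is precisely the point of a universal platform.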

The rise in popularity of cloud computing is a convenient coincidence. Imagine a future where all of your information is readily available anywhere – all you need to do is find a computer and log in to your account. The principles of cloud computing have long been around, in forms like Plan 9 (the successor to Unix). In the upcoming years, we should expect significant advances from a combination of cloud computing providing the function of online services, and HTML5 their form.

Future Considerations

As with anything that precipitates a major change in civilization, the advent of the internet has introduced some sociological and legal challenges that still need to be addressed. I anticipate that these challenges will be dealt with largely in an organic trial-and-error manner, increasing in refinement with each new case that requires serious consideration.

Harking back to the automobile metaphor, it took decades for society and the law to catch up with the consequences of the motor vehicle. Only once millions of cars were crowding overburdened roads, and the number of automotive deaths and accidents rose, were meaningful laws enacted to make driving safer. Drivers had to be licensed, insured, and above a specific age. Vehicle manufacturers were required to add safety features to their cars, like seat belts, crumple zones, and more recently airbags.

The internet has regularly been described as the Wild West: a laissez-faire land where anything goes. However, there are critical issues to be addressed, such as how to define rights online – and under whose jurisdiction. There are more than two centuries’ worth of rulings on American constitutional law interpreting individual rights in the real world – as more of life moves online, law will ultimately follow.

For example, consider the Fourth Amendment, which protects Americans from unreasonable search and seizure. When someone stores documents in their own home, there are defined legal procedures that have to be followed before those documents can be seized. When the same documents are stored on a server in another location, how does the Fourth Amendment apply? Are they still the private property of the owner, the property of the server’s owner (for legal purposes), or are the files considered to be in the public realm?

Another example lies in the Sixth and Seventh Amendments. These two amendments guarantee the right to speedy and impartial application of law, and more importantly the right to trial by jury. It has become increasingly common for companies to require customers to sign contracts agreeing to settle all legal matters by arbitration. By agreeing to arbitration, an individual gives up their right to trial by jury. Many people sign such legal contracts without understanding this principle – and it is unknown whether they would do so if they were aware of the consequences. Since the arbitration clause appears in many end user licence agreements for software and online services, any future Supreme Court ruling on the constitutionality of the practice will have a significant effect on what happens online.

Given the slow, deliberate legal process in the United States, it’s reasonable to assume that online law will evolve gradually. But it will take considerable time – and it would be foolhardy to guess what the outcome will be in the long term.

A Bright Future

Civilization has a habit of progressing relentlessly – whether one likes it or not. Sometimes changes are influenced by politics; sometimes by religion; sometimes by culture; sometimes by war; many times by factors beyond the control of humanity. Science and technology in turn have played their part. And amongst the most important technologies ever built are literacy, publishing and communication.

The internet and its related technologies represent a profound change in the nature of civilization in ways that few people could have imagined a generation ago. As we continue to define the conventions of our virtual lives, HTML5 will be just another of the things that made it possible.

The success of the HTML standards lies in their relative obscurity. Few people will ever know about them, and even fewer people will understand them. But everyone will use them as they go through their everyday lives, without the slightest thought about HTML’s rôle in helping them get things done.

In a way, this is very reminiscent of the purpose of typeface design. You can’t change the shape of a letterform too much, because everyone will notice. A well-designed typeface, on the other hand, is noticed by almost nobody – its purpose is to act as the medium for communication, not to get in the way of it.

Every day, billions of people use keyboards to put their thoughts on-screen. Many use a font called Times Roman, oblivious to its origin and the two men responsible for creating it. And it doesn’t really matter who they are or why they did it – their work has become immortal in its own manner. The HTML5 standards follow in this great tradition.