The Devil is in the Details
Developing for the Web, Now and Into the Future
Picture this scenario: an architect is tasked with drafting plans for a dozen completely identical buildings in a dozen locations across the globe, using local building contractors. This would be an enormous challenge. The plans would have to be very specific, down to the smallest detail, to ensure that variances in their interpretation are kept to a minimum, but they would also need to make allowances for the fact that each team of contractors and sub-contractors works a little differently. The architect would need to consider, among other things, the variations in local language, skill-sets, practiced building methods, and materials available. It’s possible that the architect would need to down-spec the plans to the lowest common denominator; either that, or ensure that specialist sub-contractors capable of accomplishing the more ambitious tasks are made available to every site.
In many ways, the process of developing for the web poses the same challenges as the scenario above. Replace the building plans with computer code, the contractors with the different web browsers (Internet Explorer, Firefox, Safari, Chrome, etc.) and email systems (Outlook, Mail, Hotmail, Gmail, Yahoo! Mail, etc.), and the specialist contractors with browser plugins, all working together to build identical copies of web pages and email newsletters instead of identical buildings. In both cases, to get things ‘just right’, the devil is in the details. Just as each building contractor has their own ideas on how to get the job done, so too do the various web browsers and email systems, each using their own ‘rendering engine’ that has its own quirky way of interpreting code. This is not a good thing if the goal is consistency in the final product.
To make matters worse, each version of a web browser, although inheriting its core from its predecessor, is likely to have its own sub-set of quirks. For example, pass the same identical set of instructions to Internet Explorer 6, 7 and 8 and you can expect slightly different output from each. It’s quite common to see elements in the layout shift anywhere from 1 to 20 pixels because of the way margins and padding are interpreted, and in other cases elements may overlap, making them completely unusable. Considering that all three versions are in use today, each with a user-base in the millions, web developers cannot simply ignore one browser version because it is being ‘difficult’.
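To make this concrete, here is a minimal sketch of the kind of rule that exposes these differences (the class name is illustrative). A floated element with a 10-pixel left margin sits 10 pixels from its container’s edge in most browsers, but Internet Explorer 6’s well-known ‘double margin’ bug on floats pushes it 20 pixels away, producing exactly the sort of pixel shift described above:

    <style>
      /* A floated box with a 10px left margin. Most browsers honour
         the 10px; IE6 doubles the margin on the floated side to 20px,
         shifting the layout. */
      .sidebar {
        float: left;
        width: 200px;
        margin-left: 10px;
      }
    </style>
    <div class="sidebar">Navigation</div>

The period-typical workaround was to add display: inline to the floated element, a declaration that standards-compliant browsers effectively ignore on floats but that suppresses the doubled margin in IE6.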
In an ideal world where life is simple, the code would work universally, whether viewed on a Windows PC running Firefox, on an iPhone running Safari, or on a Sony PS3 running its own browser, maximising the potential audience. But in the real world, this process has to be managed. The process of ‘tuning’ the code to work with different browsers, mail systems, operating systems, or devices can be extremely tedious and time consuming. A fix here may result in a breakdown somewhere else, which can only be found by testing and retesting. It’s not unusual to see upwards of 20 revisions of an email newsletter during testing. Cost constraints force web developers to draw a line at a select number of devices or environments where the code has to be fully functional. The target audience, budget and time allowances all factor into this decision. In some cases, designs have to be ‘massaged’ and ‘moulded’, or even dumbed down, to work with the chosen environments; at other times additional technologies are employed to get certain design elements to work (e.g. the ability to use non-web-safe fonts); and in some extreme cases developers need to drop support entirely for certain browsers, like the ageing Internet Explorer 6, which lacks the ability to understand the advanced instructions that make today’s websites so engaging. This is done so that finite resources can be focused on improving the experience for the majority.
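One period-typical technique for this kind of ‘tuning’ is Internet Explorer’s conditional comments, which let a developer serve an extra, browser-specific stylesheet without disturbing anyone else (the file names below are placeholders). Every other browser treats the whole block as an ordinary HTML comment and skips it; IE 6 and earlier parse it and load the fixes:

    <link rel="stylesheet" href="styles.css">
    <!--[if lte IE 6]>
    <link rel="stylesheet" href="ie6-fixes.css">
    <![endif]-->

This keeps the browser-specific patches quarantined in one file, so a fix aimed at IE 6 cannot quietly break the layout in Firefox or Safari.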
There is, however, hope on the horizon, in particular with HTML 5 and WebKit technologies. While the World Wide Web Consortium (W3C) has been working to develop and promote open specifications for the web since the 90s, these standards were, and still are, based on the somewhat limited HTML 4.0 language. Almost all of the websites we see today use HTML 4.0 at their core, but rely on layers of more modern technologies to enhance the experience. Some of these layers are standards-based, such as JavaScript and CSS; many, however, are not (e.g. Adobe Flash and Microsoft Silverlight). The hope is that the next revision of HTML (currently in a working draft state) will be powerful enough in itself to deliver the feature-rich content of today and tomorrow, allowing us to do away with the extra layers of non-standards-based technologies. Early tests are looking very promising, and experts are speculating that HTML 5 will see widespread adoption by 2012.
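The working draft’s video element illustrates the idea: video playback that today typically requires the Flash plugin becomes a native part of the page (the file name is a placeholder):

    <!-- Native video playback in HTML 5-capable browsers:
         no Flash plugin required. -->
    <video width="640" height="360" controls>
      <source src="promo.mp4" type="video/mp4">
      Your browser does not support HTML 5 video.
    </video>

The text inside the element doubles as a fallback for browsers that do not yet understand HTML 5, which matters while the standard is still being adopted.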
WebKit, on the other hand, is an open-source, HTML 5-compatible rendering engine already used by Apple and Google to drive their web browsers and other technologies. If WebKit were used by all browsers and mail systems, the output of each would, in theory, be identical. The use of a single web rendering engine makes perfect sense, taking the ‘quirkiness’ out of development, and WebKit currently looks like the most likely candidate to achieve this.
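The cost of today’s engine fragmentation shows up even in a single CSS rule. Rounded corners, for instance, currently require one engine-prefixed declaration per rendering engine; with a single shared engine, the one standard property would suffice (a minimal sketch):

    <style>
      .panel {
        -webkit-border-radius: 8px; /* WebKit: Safari, Chrome */
        -moz-border-radius: 8px;    /* Gecko: Firefox */
        border-radius: 8px;         /* the standard property */
      }
    </style>

Multiply that duplication across an entire stylesheet and the appeal of one engine, one declaration becomes obvious.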
Potentially, these developments will significantly benefit web developers and content generators (i.e. our clients at Jack in the box) as well as end users. Developers will have fewer concerns about scaling back ideas to work with the lowest common denominator, as is currently the case, and the time and cost of ‘tuning’ code for cross-compatibility would be saved, or could be put towards creating richer and more engaging experiences for the user. Until that time, developers will continue to work hard, striving to bring a unified user experience to as many users as budgets allow.