Augmented Environments are a subset of “Augmented Reality”. Traditional augmented reality has been around since the 1980s, and the majority of augmented reality research is still concerned with the same basic concept: the user of the augmented reality system wears some form of sensing device (such as a head-mounted camera or GPS receiver) and some feedback device – typically a see-through head-mounted display. As the user wanders around the real world, a computer (either over a wireless connection, or carried by the user) attaches extra information – the “augmentation” – to objects in the environment, such that the line between the real world and the computer-generated enhancement becomes blurred.
Whilst rendering objects in 3D and positioning them correctly in the field of view can give a reasonable impression of a unified reality, several issues limit the effectiveness of wearable augmented reality:
Encumbrance – The user has to carry both the sensing and feedback devices with them, and in most cases the computing power, too. In the past this led to severe practicality problems: early systems could only be worn for about twenty minutes before either the batteries or the back of the user gave out! Of course, advances in mobile technology have eased the more extreme encumbrance problems. It is still, however, inconvenient if not uncomfortable in many cases to have to wear so much equipment.
Cost – Quite simply, for every user a complete set of kit is required. The costs can quickly mount…
Exclusivity – and not just in terms of cost. Anyone not wearing a set of AR kit has no access to the augmentations, and even if several users all have kit, there’s then the problem of getting their systems to agree with each other about exactly where all the augmentations are.
Clearly, there is plenty to like about AR – but just as clearly, for many applications and environments, this type of augmentation will never be practical. In the early nineties, with much involvement from Xerox, Pierre Wellner (with the Digital Desk) and Quentin Stafford-Fraser (with the BrightBoard) put together the first working prototypes of what are now termed Augmented Environments. In these systems, rather than the user doing the augmenting, the environment that the user inhabits does it – more properly, the systems could be termed augmenting environments.
In a departure from wearable AR, Augmented Environments allow a user to enter the augmented area with no extra equipment, and to interact with a computer through the environment itself, with feedback often provided by a data projector. The benefits of this augmentation paradigm soon became apparent, setting off a steady stream of work – especially at the MIT Media Lab – which continues today.
Much of the research in Augmented Environments centres on the same issue: interpreting the user’s input. Because the entire environment provides the interface, the range of possible inputs is huge, and extremely difficult to process compared with a keyboard and mouse. Relatively few projects have stuck to pure vision systems, in which a camera provides all the input to the computer; most have resorted to everything from radio-tagged objects to laser pointers to sticks with painted ends to simplify the problem. In the opinion of the OpenIllusionist team, these are all valid for specific applications, but they take away the biggest draw of augmented environments – the idea that you can waltz into one and just start using it, without any extra equipment of any sort. As processing power has progressed over recent years, we’re now at the stage where a video input can be interpreted fast enough to make the input problem soluble… so why can’t you get an augmented desk for your home?
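Vision-only input of the kind described above often starts with something as simple as frame differencing: compare successive camera frames and treat the pixels that changed as user activity. The sketch below illustrates the idea using plain NumPy arrays standing in for greyscale camera frames; the function names (`detect_activity`, `activity_centroid`) and the threshold value are illustrative assumptions for this sketch, not part of the OpenIllusionist API.

```python
import numpy as np

def detect_activity(prev_frame, frame, threshold=30):
    """Return a boolean mask of pixels that changed noticeably
    between two greyscale frames (simple frame differencing)."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def activity_centroid(mask):
    """Return the (x, y) centre of the changed region, or None if
    nothing moved: a crude stand-in for 'where the user is acting'."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return (float(xs.mean()), float(ys.mean()))

# Two synthetic 8x8 frames: a bright spot appears at column 3, row 2.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[2, 3] = 255

mask = detect_activity(prev, curr)
print(activity_centroid(mask))  # -> (3.0, 2.0)
```

In a real augmented environment the frames would come from the camera feed and the changed regions would be tracked over time, but the core loop is this cheap per-pixel comparison, which is why commodity hardware can now keep up with it.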
The problem is that most augmented environments work is still done inside academic research groups – and it’s pretty much all research, rarely development. It’s not stable, it’s not portable, and – most fundamentally – it’s not available for actual developers to build on. The Studierstube and AR Toolkit projects have addressed this problem for wearable AR… and now OpenIllusionist is attempting to bring the ability to develop an augmented environment to anyone who wants it. Grab yourself a copy of the framework, a webcam or video camera, a monitor or (if you can find one) a projector, and make magic!