The Coming Age of Computer Graphics and the Evolution of Language
Professor of Computer Science
Director, Games for Learning Institute
NYU Media Research Lab
Sometime in the coming years - whether through ubiquitous projection, AR glasses, smart contact lenses, retinal implants, or some technology as yet unknown - we will live in an eccescopic world, in which everything we see around us will be augmented by computer graphics, including our own appearance. In a sense, we are only now entering the Age of Computer Graphics.
As children are born into this brave new world, what will their experience be? Face-to-face communication, both in person and over great distances, will become visually enhanced, and any tangible object will be able to serve as an interface to digital information. Hand gestures will be able to produce visual artifacts.
After these things come to pass, how will future generations of children evolve natural language itself? How might they think and speak differently about the world around them? What will life in such a world be like for those who are native born to it?
We will present some possibilities, and some suggestions for empirical ways to explore those possibilities now - without needing to wait for those smart contact lenses.
Ishii, H., and Tangible Media Group. "Tangible Bits: Towards Seamless Interface between People, Bits, and Atoms." NTT Publishing Co., Ltd., Tokyo, Japan, June 2000. ISBN 4-7571-0053-3.
Senghas, A., and M. Coppola. "Children Creating Language: How Nicaraguan Sign Language Acquired a Spatial Grammar." Psychological Science 12(4): 323-328, 2001.
Ken Perlin, a professor in the Department of Computer Science at New York University, directs the NYU Games for Learning Institute and is a participating faculty member in the NYU Media and Games Network (MAGNET). He was also founding director of the Media Research Laboratory and director of the NYU Center for Advanced Technology. His research interests include graphics, animation, augmented and mixed reality, user interfaces, science education, and multimedia. He received an Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences for his noise and turbulence procedural texturing techniques, which are widely used in feature films and television. His other honors include the 2008 ACM/SIGGRAPH Computer Graphics Achievement Award, the TrapCode award for achievement in computer graphics research, the NYC Mayor's Award for Excellence in Science and Technology, the Sokol award for outstanding science faculty at NYU, and a Presidential Young Investigator Award from the National Science Foundation. He has served on the program committee of the AAAS, was general chair of the UIST 2010 conference, and has been a featured artist at the Whitney Museum of American Art.
Dr. Perlin received his Ph.D. in Computer Science from New York University and a B.A. in theoretical mathematics from Harvard University. Before working at NYU, he was Head of Software Development at R/Greenberg Associates in New York, NY. Prior to that, he was the System Architect for computer-generated animation at Mathematical Applications Group, Inc. (MAGI).
Closing keynote (joint with ACM UIST attendees)
Designing the User in User Interfaces
Director for Mixed Reality Research
Institute for Creative Technologies
USC School of Cinematic Arts Interactive Media Division
In the good old days, the human was here, the computer there, and a good living was to be made by designing ways to interface between the two. Now we find ourselves unthinkingly pinching to zoom in on a picture in a paper magazine. User interfaces are changing instinctual human behavior and instinctual human behavior is changing user interfaces. We point or look left in the "virtual" world just as we point or look left in the physical.
It is clear that nothing is clear anymore: the need for an "interface" vanishes when the boundaries between the physical and the virtual disappear. We are at a watershed moment, when to experience being human means to experience being machine. When there is no user interface, it is simply what you do. When instinct supplants mice and menus, and the interface insinuates itself into the human psyche.
We are redefining and creating what it means to be human in this new physical/virtual integrated reality - we are not just designing user interfaces, we are designing users.
Mark Bolas is the Director of the Mixed Reality Lab at the USC Institute for Creative Technologies and an Associate Professor in the Interactive Media & Games Division of the School of Cinematic Arts, where he directs the Mixed Reality Studio. His work focuses on researching perception, agency, and intelligence, creating virtual environments and transducers that fully engage one's perception and cognition to create a visceral memory of the experience.
Bolas leads research projects for the Army Research Office, the Office of Naval Research, and DARPA, as well as a variety of other clients, including content for the entertainment industry. He has led the development of a number of influential products including the open-source FOV2GO, which informed the design of the Oculus Rift; the Wide-5 HMD; Pinch interface gloves; and the Boom and Molly telepresence system. Bolas' 1988-89 thesis work "Design and Virtual Environments" was the first effort to map the breadth of virtual reality as a new medium.
In addition to USC, he has taught at Stanford University and Keio University, exploring tangible interfaces, augmented reality, and computational illumination. These projects have spanned context-sensitive audio interfaces, socially interactive toys, confocal illumination, and mobile phone web logging.
Bolas co-founded Fakespace Labs, Inc. in 1988 and has developed and sold VR hardware and systems for dozens of major research labs over the decades. He holds more than twenty patents and has been recognized with awards from the Consumer Electronics Association and Popular Science, a SIGGRAPH Best Emerging Technology award, the IEEE Industry Excellence Award, and the IEEE Virtual Reality Technical Achievement Award.