Updated Multiprocessor VR System
So, here's the general block diagram of my latest VR controller system
idea. The main inspiration for this generation came from an article in
the November 1995 issue of Circuit Cellar INK about Transputer-based
parallel computer design and methodology.
If you saw my previous idea you may have noticed some rather obvious
flaws in my thinking. From what I've been able to figure out, DSPs
process signals; they don't usually generate them.
My new design takes this into consideration, and it also makes use of
another little feature I've discovered on newer processors called NSP
(Native Signal Processing)... basically a built-in DSP on newer
Pentium-class and later chips...
The Master Controller CPU:
- Handles global interactions between devices (slave CPUs, DSP,
controllers)
- Manages tasks, farms out objects and interactions to slave CPUs
- Manages A/V DSP tasks, pattern recognition algorithms, etc...
- Relays collision detection info from A/V DSP to appropriate slaves
The Slave CPU:
- Handles objects, micro-interactions and micro-environment as
assigned by the Master CPU.
- Generates audio and video to be processed by the A/V DSP
- Reacts to collisions detected by A/V DSP
- Handles specific A/V DSP-selected signals for object interactions
and more advanced AI processing
The A/V DSP:
- Merges A/V signals from the slave NSP outputs
- Detects object collisions; passes info on to Master Controller CPU
- Maps external A/V inputs into the environment and onto objects from
the slave NSP outputs.
- Classifies signals (pattern recognition) and passes results on to
master CPU
- Outputs the final A/V "view" to the user.
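Just so the split of responsibilities above isn't pure hand-waving, here
is a rough Java-style sketch of how I picture the three pieces talking
to each other. Every class and method name in it is made up for
illustration; it's a sketch of the message flow, not an implementation.

// Invented names throughout -- this only shows who calls whom.

interface SlaveCpu {
    void assignObject(int objectId);          // master farms an object out to this slave
    void notifyCollision(int otherObjectId);  // master relays a collision back down
    Frame renderFrame();                      // this slave's NSP-processed A/V output
}

interface AvDsp {
    Frame merge(Frame[] slaveFrames);         // combine slave NSP outputs into one "world"
    int[][] detectCollisions(Frame world);    // returns {slaveIndex, otherObjectId} pairs
    void mapExternalInput(Frame external);    // live external A/V mapped into the world
}

class MasterController {
    SlaveCpu[] slaves;
    AvDsp dsp;

    // one pass through the system: gather, merge, detect, relay
    void tick() {
        Frame[] frames = new Frame[slaves.length];
        for (int i = 0; i < slaves.length; i++)
            frames[i] = slaves[i].renderFrame();

        Frame world = dsp.merge(frames);

        int[][] hits = dsp.detectCollisions(world);
        for (int i = 0; i < hits.length; i++)
            slaves[hits[i][0]].notifyCollision(hits[i][1]);
    }
}

class Frame { /* stand-in for an A/V buffer */ }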
Now for the whys of this:
I want to create a system which can operate by itself, and as a
controller for a few or many external devices, computers, networks, the
world...
- I want to be able to map external audio and video into this
world on the fly, without having to tie up my virtual-world generating
processors.
- Collision detection seems to be something best done by a dedicated
processor. All interactions in a virtual world can be thought of as
collisions, whether they be "shining" a light on an object, "typing" on a
virtual keyboard, or designing and testing a new prototype. (There's a
little sketch of the basic test right after this list.)
- Some degree of the simulator sickness people experience is due simply
to the lag between a change in virtual position and the time it takes to
render that change. (Arthur Zwern discussed this in his talk on HMDs at
the Spring VRWorld '95.)
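About that collision bullet: here is a sketch of the kind of cheap
first-pass test I imagine the A/V DSP running constantly once it has
everything in world coordinates. The bounding-sphere simplification (and
the names) are mine; a real pass would refine whatever hits it finds.

class BoundingSphere {
    double x, y, z, r;   // world-space centre and bounding radius of an object

    // two objects "interact" when their bounding spheres overlap
    boolean collidesWith(BoundingSphere other) {
        double dx = x - other.x;
        double dy = y - other.y;
        double dz = z - other.z;
        double distSq = dx * dx + dy * dy + dz * dz;
        double rad = r + other.r;
        return distSq <= rad * rad;   // compare squared distances, no square root needed
    }
}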
Back to the lag problem: that is why I have set things up in this slave
NSP to A/V DSP configuration. The slave CPU renders the pieces of an
object in a simple plane of reference; its NSP takes in the output
generated by the slave's video processor (video card/chip) and maps it
into position, smooths some details, maps textures, etc... The combined
object NSP output is then passed to the A/V DSP to be mapped into "world"
position, collisions are detected in the interactions of these objects,
and the IDs are passed back through the system as necessary. It's the A/V
DSP that controls what the user sees and interacts with, and it maps the
desired output out. By incorporating prediction algorithms (good old
control systems principles) into controlling the A/V output, everything
should speed up an extra degree.
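By prediction I don't mean anything fancier than estimating where the
user's head (or an object) will be a frame or two from now and letting
the A/V DSP render for that pose instead of the stale tracker reading.
Here's a one-axis sketch of the sort of simple tracking filter I have in
mind; the gain values are just guesses and the names are mine.

class AxisPredictor {
    double pos, vel;   // current estimate of position and velocity along one axis
    static final double ALPHA = 0.85, BETA = 0.005;   // smoothing gains, tune by experiment

    // fold in the latest tracker reading, dt = seconds since the last reading
    void update(double measured, double dt) {
        double predicted = pos + vel * dt;
        double residual = measured - predicted;
        pos = predicted + ALPHA * residual;
        vel = vel + (BETA / dt) * residual;
    }

    // where this axis should be "lookahead" seconds from now;
    // the A/V DSP renders for the predicted pose to hide the pipeline lag
    double predict(double lookahead) {
        return pos + vel * lookahead;
    }
}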
Any comments? I wonder if anyone is
reading this, or if there is some fatal flaw in my logic... I've just
discovered general descriptions of Java as a programming language... This
seems to be an ideal programming and design environment for this idea...
Anyhow, here's the diagram itself:
Enjoy!
VRML list
Send comments to: Tekmage
Teknomage Industries,
Copyright (c) 1995