
Concept

Let's sum up what our hardware will be able to do for us, and then let our imagination suggest where to apply it:

  1. Avalon - The microchip

    The Avalon concept is based on the idea of providing heavy acceleration for raytraced rendering: at least realtime performance (a minimum of 25 frames per second) at a resolution of 800x600 pixels, with 4 to 8 times oversampling (antialiasing) and 3+ dynamic light sources, from a single chip. By adding further chips and connecting them via a dedicated serial interface, the rendering power can be increased by nearly 100% per chip. This power can be used to display at higher resolutions or with higher scene complexity.
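A back-of-the-envelope estimate of the per-second ray budget implied by these figures (purely illustrative; the chip's actual internal numbers are not stated here, and we assume one shadow ray per light per hit):

```python
# Rough ray-budget estimate for the single-chip target above.
width, height = 800, 600
fps = 25                 # "realtime" minimum from the concept
oversampling = 8         # worst case of the 4-8x antialiasing range
lights = 3               # assumed: one shadow ray per light per hit

primary_rays_per_sec = width * height * oversampling * fps
shadow_rays_per_sec = primary_rays_per_sec * lights
total_rays_per_sec = primary_rays_per_sec + shadow_rays_per_sec

print(primary_rays_per_sec)  # 96000000
print(total_rays_per_sec)    # 384000000
```

Even before reflection and refraction rays, the chip would have to trace on the order of a few hundred million rays per second, which is why a dedicated acceleration concept is needed.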
     
  2. Geometry

    1. We can build up so-called "static scenes" which have to be uploaded to the chip, precalculated, and then stored back on the host. These scenes can contain a large amount of geometry data built from triangles. Each triangle takes about 64 bytes (we are trying to reduce that) of host memory (PC memory).
    2. "Dynamic objects" such as players in games, doors or animals can comprise about 40,000 to 80,000 triangles in total. Level-of-detail support is handled by the API in software in the first chip generation.
      The distinction between "static" and "dynamic" is a trade-off we had to make in order to use the raytracing rendering method. Raytracing needs most of the complete scene information during the render process, but because of the heavy computational demands there must be an acceleration concept that reduces the number of calculations. This leads to a kind of precalculation that has to be performed over the scene data. This precalculation can be handled offline for static scenes (a few seconds), but must be done in realtime (<40 milliseconds) during rendering for the dynamic part.
    3. Dynamic objects can be stored as key-frame animations. The AVALON chip generates intermediate positions automatically (on request from the software). All matrix manipulation is done internally. There will also be an option to stack these operations hierarchically; this way a bone system is easily provided.
    4. LOD (Level Of Detail) is a technique to reduce the complexity of very large scenes. It is achieved through a combination of design, software and hardware: the designer produces several levels of detail (supported by today's rendering packages), the software (our API) is responsible for selecting the appropriate levels, and the hardware switches between these levels automatically depending on the distance from the observer.
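The per-triangle memory cost and the distance-based LOD switching described above can be sketched as follows (a hypothetical illustration; the function names and switch distances are ours, not part of the AVALON API):

```python
# Per-triangle host memory cost, per the concept text.
TRIANGLE_BYTES = 64

def scene_memory_bytes(triangle_count):
    """Host (PC) memory needed to store a scene's triangle data."""
    return triangle_count * TRIANGLE_BYTES

def select_lod(distance, switch_distances):
    """Pick a detail level from ascending switch distances.
    Level 0 is the finest model; past the last threshold the
    coarsest designer-authored model is used."""
    for level, limit in enumerate(switch_distances):
        if distance < limit:
            return level
    return len(switch_distances)

print(scene_memory_bytes(80_000))       # 5120000 bytes for 80k triangles
print(select_lod(5.0, [10.0, 50.0]))    # 0 -> full detail up close
print(select_lod(120.0, [10.0, 50.0]))  # 2 -> coarsest model far away
```

So even the full 80,000-triangle dynamic budget would occupy only about 5 MB of host memory at 64 bytes per triangle.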
       
  3. Shading

    After finding the intersection of a ray with the geometry, some shading must be performed. This shading will be done in hardware too.

    1. Reflection and refraction can be used with ease. No more definition and creation of cubic environment maps: simply switch on reflection, define the color and a few other simple parameters, and the object reflects. These reflections are naturally updated each frame by the raytracing process. And not to forget: this is an iterative process. If several objects reflect or refract, they interact with each other.
    2. Light sources can be directional, point or spot lights (with a radial or square shape). They can have an illumination area which influences how shadows are generated.
    3. Shadows are calculated with color and shape in a natural way; no more programming. The shadow calculation is as accurate as you set your parameters: you can have sharp shadows from point light sources or soft shadows from area lights. You can choose between raytraced shadows and shadow maps (which have problems with transparent objects, but provide faster overall performance).
    4. The standard shaders used by 3D Studio Max 3.1 are supported: "Blinn", "Phong", "Oren-Nayar-Blinn", "Metal" and "Anisotropic".
    5. Several texture units are used in combination with a kind of MIP mapping and filtering to enhance image quality. There can be up to 8 texture layers per surface. We will support ambient, diffuse, specular, specular-level, opacity, bump, reflection, highlight and shadow mapping. These are 2-dimensional textures.
    6. Additionally we will support 3-dimensional textures with noise and turbulence. Many parameters are available to configure these textures.
    7. Then the so-called atmospheric shaders will be implemented. They are used to generate realistic fog, smoke, clouds, fire, explosions and other "special effects". These effects are completely 3-dimensional, so you can walk through the fog or smoke and your view will be affected just as you know it from nature. Several parameters are used to configure these effects. They can cast shadows on geometry (or on themselves!), receive shadows, reflect light from light sources, and fill volumes that you define.
    8. So-called "caustics" are used to increase visual realism. Where caustics are used, light is traced from the light source to the object (as in nature) rather than from the observer's eye into the scene (standard raytracing). This makes it possible to create light reflections on OTHER objects. Examples are a wine glass refracting light onto the table, or a swimming pool with glittering light on its floor.
    9. So-called "global illumination" will be implemented, but cannot be used during realtime rendering with only one chip. With a few chips working in parallel it will be possible to enhance lighting with indirect light via global illumination in realtime mode. Indirect light is an important factor in generating realistic-looking images, but because of the required computing power it cannot be provided by a single piece of hardware. Second-generation AVALON chips will in all probability offer this feature.
    10. A programmable shader unit is under concept development.
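To make the shader list above concrete, here is a minimal software sketch of the "Blinn" model named in it (diffuse + specular terms only; the ambient term, textures and shadowing are omitted, and the hardware's actual implementation may differ):

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _normalize(v):
    length = math.sqrt(_dot(v, v))
    return tuple(x / length for x in v)

def blinn(normal, to_light, to_eye, kd, ks, shininess):
    """Scalar Blinn shading intensity at a hit point for one light.
    kd/ks are the diffuse/specular strengths; the half vector h
    replaces Phong's reflected-ray term."""
    n, l, v = _normalize(normal), _normalize(to_light), _normalize(to_eye)
    h = _normalize(tuple(a + b for a, b in zip(l, v)))  # half vector
    diffuse = kd * max(_dot(n, l), 0.0)
    specular = ks * max(_dot(n, h), 0.0) ** shininess
    return diffuse + specular

# Light and eye both head-on: full diffuse plus full specular.
print(blinn((0, 0, 1), (0, 0, 1), (0, 0, 1), 0.8, 0.2, 32))  # 1.0
```

In the raytracing pipeline this evaluation runs once per light source after the ray-geometry intersection is found, with a shadow ray deciding whether the light contributes at all.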
       
  4. 2D-Post processing

    After rendering, images can be enhanced by a 2-dimensional image processing step. There will be a small array of DSPs (Digital Signal Processors) in the hardware that can run several custom programs on the frame buffer.
    1. Cheap smoothing via filters. This is a kind of antialiasing that needs only a little processing power (in contrast to sending lots of rays into the scene and averaging the resulting colors). The quality is (unfortunately) lower than standard antialiasing, but in many situations there is no need for more.
    2. Glow effects or simple neon effects can be achieved by another 2D operation.
    3. Add your own programs ...
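The "cheap smoothing" pass above could look like the following 3x3 box filter over a grayscale frame buffer (a software sketch of the kind of custom program the DSP array would run; the actual filter kernels are not specified in the concept):

```python
def box_smooth(frame, width, height):
    """Average each pixel with its 3x3 neighbourhood.
    frame is a row-major list of intensities; edge pixels average
    only the neighbours that actually exist."""
    out = []
    for y in range(height):
        for x in range(width):
            total, count = 0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < width and 0 <= ny < height:
                        total += frame[ny * width + nx]
                        count += 1
            out.append(total / count)
    return out

# A single bright pixel gets spread over its neighbourhood:
print(box_smooth([0, 0, 0, 0, 9, 0, 0, 0, 0], 3, 3))
# -> [2.25, 1.5, 2.25, 1.5, 1.0, 1.5, 2.25, 1.5, 2.25]
```

One pass over the frame buffer touches each pixel a constant number of times, which is why this costs so much less than tracing additional rays per pixel.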

What is it good for?

Although we designed this chip with games in mind, there are many other possible applications.
As you can imagine (if you have first-hand experience with Z-buffer technology), the software developer has far less work to do to generate and display scenes. With AVALON, all visual effects can be handled by the hardware itself. Once a designer has built a scene, all visual effects are already included. The only part left to the developer is animating the objects, lights and perhaps the surface appearance [besides the other things to do within the software application].

  • Due to the very large geometry data sets that can be handled by the AVALON chip, it will be easy to do realistic-looking simulations such as flight simulators with realistic terrain (water, trees and buildings).
  • Due to the very high image quality, it can be used to accelerate scene creation as a companion to standard rendering packages (3D Studio Max, Maya, Softimage, Cinema 4D, POV-Ray ...).
  • In eCommerce applications it can give a very realistic impression of the goods for sale, with the option to turn or move them in realistic environments.
  • In architectural applications it can generate realistic walk-throughs. Sunlight, furniture details and surface appearance will be displayed with a natural look and feel.
  • Rendering on large screens at very high resolutions will no longer be a problem. Thanks to automated load balancing, several AVALON chips together can produce cinema-resolution renderings in realtime (4096 x 4096 or even 16384 x 16384), depending on your needs and budget :)
    What about producing a movie digitally and displaying it directly from the digital source?
    Inviting guests to be part of the movie ... as an observer within the movie, or as an actor? Or don't "finish" the movie; let it continue with the guests ...
  • Medical, chemical, physical, automotive, flight and aerospace applications: all of them can benefit from this little piece of hardware.
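A rough estimate of how many chips the cinema resolutions mentioned above would require, if the "nearly 100% per chip" scaling were perfectly linear (an optimistic, purely illustrative assumption):

```python
import math

# Single-chip realtime target from the concept: 800x600 pixels.
BASE_PIXELS = 800 * 600

def chips_needed(width, height):
    """Chips required to keep the same frame rate and quality at a
    higher resolution, assuming ideal linear scaling per chip."""
    return math.ceil((width * height) / BASE_PIXELS)

print(chips_needed(4096, 4096))    # 35
print(chips_needed(16384, 16384))  # 560
```

So a 4096x4096 realtime installation would sit in the range of a few dozen chips, while 16384x16384 would be a serious (and seriously budgeted) wall of hardware.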