The world is awash in visible information, almost all of which goes undetected. Many things that we would like to see remain unseen because eyes and cameras cannot look everywhere at once. Simple mysteries, such as the beating of a butterfly's wing or where each needle lands when a tree falls in the forest, could be resolved if we knew where and when to look. Exciting stories, such as how every player on the field and every fan in the stands responds to a well-hit baseball, could be told if only we could capture all the information in the light around us. Aqueti captures all the information in the light it measures. There is no need to know where to "point and shoot": our cameras capture everything a camera could see, looking everywhere at once, with revolutionary efficiency.
Parallel processing, the lynchpin of supercomputing for the past quarter century, is Aqueti's key innovation. Just as supercomputers are built from arrays of microprocessors, Aqueti builds supercameras from arrays of microcameras. With 1000 people, each holding a camera, we could look everywhere at once; this is the power of parallel processing. Unfortunately, 1000 cameras would cost a great deal and occupy a great deal of space. Aqueti's multiscale camera technology implements parallel processing at the microscale, building cameras that capture the field of view of 1000 conventional cameras at a cost and volume comparable to a single system.
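The scaling claim above can be made concrete with a quick back-of-envelope comparison. The per-camera cost and volume figures below are illustrative assumptions, not Aqueti specifications:

```python
# Back-of-envelope comparison of a brute-force array of conventional
# cameras vs. a single multiscale supercamera covering the same field.
# All numbers are illustrative assumptions, not Aqueti specifications.

N_CONVENTIONAL = 1000        # conventional cameras needed to match the field of view
COST_PER_CAMERA = 500.0      # assumed cost of one conventional camera, USD
VOLUME_PER_CAMERA = 0.5      # assumed volume of one conventional camera, liters

array_cost = N_CONVENTIONAL * COST_PER_CAMERA
array_volume = N_CONVENTIONAL * VOLUME_PER_CAMERA
print(f"Brute-force array: ${array_cost:,.0f}, {array_volume:.0f} liters")
# A multiscale design instead shares one objective lens across all the
# microcameras, which is what pulls cost and volume back toward the
# scale of a single system.
```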
Aqueti’s supercamera infrastructure includes
- The MC2 microcamera,
- Gigagon spherical objective lenses,
- The Aware macrocamera architecture, and
- Zoomcast services.
These technologies incorporate patented and patent-pending innovations in multiscale lens design, monocentric optics, focus and exposure control, and image data management. Most of all, however, they incorporate the skill and technical knowledge of Aqueti's computational imaging team.
MC2 microcameras use revolutionary 3D sensor-chip integration to allow dense packing into large arrays. Microcamera optics incorporate novel aberration control and ultracompact focus-control systems, paired with specialty high-performance electronic read-out systems. The photo above shows the sensor and optics packages, along with the microcamera control module. MC2 microcameras allow Aqueti to construct "virtual focal planes" of arbitrary size. Moving beyond this revolutionary capacity, Aqueti has created an integrated microcamera control architecture that allows parallel and independent adjustment of focus, exposure, frame rate, region of interest, and resolution within large microcamera arrays.
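The control architecture described above can be sketched in miniature: each microcamera carries its own settings, and the array is adjusted in parallel. The class and function names below are hypothetical, invented purely for illustration, and do not reflect Aqueti's actual software interfaces:

```python
from dataclasses import dataclass
from concurrent.futures import ThreadPoolExecutor

@dataclass
class MicrocameraSettings:
    # Each microcamera in the array carries its own independent state.
    focus_mm: float        # focus distance for this microcamera's narrow field
    exposure_us: int       # exposure time, microseconds
    frame_rate_hz: float   # per-microcamera frame rate
    roi: tuple             # (x, y, width, height) region of interest on the sensor

def apply_settings(cam_id: int, s: MicrocameraSettings) -> str:
    # A real system would push these values to the microcamera hardware over
    # a control bus; here we simply report what would be applied.
    return f"cam {cam_id}: focus={s.focus_mm}mm exposure={s.exposure_us}us"

# Configure four microcameras with different focus distances, in parallel.
settings = {i: MicrocameraSettings(500.0 + 100 * i, 1000, 30.0, (0, 0, 1920, 1080))
            for i in range(4)}
with ThreadPoolExecutor() as pool:
    reports = list(pool.map(lambda kv: apply_settings(*kv), settings.items()))
for line in reports:
    print(line)
```

The point of the sketch is the independence: no setting is global, so one microcamera can track a nearby subject while its neighbor holds focus at infinity.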
Gigagon lenses are multilayer spherical ball lenses designed to reduce chromatic and spherical aberration while delivering large-aperture images to spherical focal surfaces. The AWARE 10 Gigagon lens is shown above.
While high-performance spherical optics have been in use for over 150 years, such lenses have not previously found wide application because it is difficult or impossible to refocus a spherical lens on multiple object ranges. Aqueti's Aware macrocamera architecture, illustrated below, overcomes this problem by integrating large arrays of microcameras on a spherical dome surrounding the Gigagon lens. Each microcamera may be individually focused on objects within its narrow field of view; typically, a microcamera observes only 2 degrees of the 100-degree Aware field. Aware macrocameras incorporate patent-pending focal mechanisms and microcamera integration and alignment technologies to achieve this.
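A naive solid-angle estimate gives a feel for why per-microcamera focusing must be handled in parallel. Treating each 2-degree view and the full 100-degree field as circular cones, as the text's numbers suggest, the ratio of solid angles bounds the number of narrow views in play (a simplification that ignores overlap, sensor aspect ratio, and practical packing):

```python
import math

def cone_solid_angle(full_angle_deg: float) -> float:
    # Solid angle (steradians) of a cone with the given full apex angle.
    half = math.radians(full_angle_deg / 2.0)
    return 2.0 * math.pi * (1.0 - math.cos(half))

field = cone_solid_angle(100.0)   # full Aware field, from the text
patch = cone_solid_angle(2.0)     # one microcamera's view, from the text
print(f"~{field / patch:.0f} two-degree patches fill the field")
# This is an idealized patch count, not a microcamera count: practical
# designs overlap adjacent views and trade patch size against sensor
# resolution. But at this scale, no single focus setting can serve the
# whole field, which is why each microcamera focuses independently.
```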
The analogy between supercameras and supercomputers becomes more direct at the electronic processing and communications layer of the Aware cameras. Microcamera clusters interface with "microcamera control modules" (MCCMs, in Aqueti-speak) to read out data. Parallel MCCM communication and processing interfaces allow data streaming approaching 100 gigabytes per second. Aqueti's proprietary gigapixel video data structures, e.g. Zoomcast technology, allow this data to be efficiently recorded, processed, and communicated for diverse applications.
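A rough calculation shows why the aggregate bandwidth lands near that figure. The pixel depth and frame rate below are assumed for illustration, not Aqueti-published values:

```python
# Raw data rate of an uncompressed gigapixel video stream.
# Pixel depth and frame rate are illustrative assumptions.
pixels_per_frame = 1.0e9   # one gigapixel composite frame
bytes_per_pixel = 3        # e.g. 8-bit RGB after demosaicing
frame_rate_hz = 30         # assumed video rate

rate = pixels_per_frame * bytes_per_pixel * frame_rate_hz
print(f"{rate / 1e9:.0f} GB/s raw")  # 90 GB/s, near the ~100 GB/s figure
```

Sustaining a raw rate of that order through a single interface is impractical, which is why read-out is spread across parallel MCCM links and why efficient recording and streaming structures such as Zoomcast matter downstream.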