Thursday, October 05, 2006

The beginnings...

How do you define a virtual museum?

  • (2006) From Wikipedia, the free encyclopedia
"A virtual museum (sometimes web museum) is an online website with a collection of objects (real or virtual) or exhibitions. They include contemporary, historical and sometimes artistic content. Examples include the Virtual Museum of Computing. Some are produced by enthusiastic individuals such as the Lin Hsin Hsin Art Museum; others, like the UK's 24 Hour Museum and the Virtual Museum of Canada, are professional endeavours."

"A virtual museum is a collection of electronic artifacts and information resources - virtually anything which can be digitized. The collection may include paintings, drawings, photographs, diagrams, graphs, recordings, video segments, newspaper articles, transcripts of interviews, numerical databases and a host of other items which may be saved on the virtual museum's file server. It may also offer pointers to great resources around the world relevant to the museum's main focus."

2 comments:

Smithsonian Latino Center said...

Other models to study...
Capturing Content for Virtual Museums: from Pieces to Exhibits
Bradley Hemminger, Gerald Bolas, Doug Schiff

Museums have always recorded information about their content, by individual items, collections, and exhibits. With the advent of photography, and especially recently with digital photography, museums increasingly record 2D pictures of items and sometimes scenes to complement text descriptions. In addition to using this descriptive information for their own purposes, museums are beginning to make some of this 2D content available via the Web. The ability to conveniently take multiple photographic views and laser scanned representations of single objects has made possible increasingly realistic and accurate recordings of objects. These methods allow for the capture not just of the visual appearance of the object, but also an accurate 3D spatial representation. This spatial information is of high enough quality to allow scholarly study and comparison of objects (Rowe 2003b). The methodology in this paper builds on previous work to capture both visually accurate information (photographic texture and color) and spatially accurate information (laser scanning) and integrate them into a combined virtual reality model.
Below we discuss the different methodologies used to capture 3D representations of objects and scenes. It is important to distinguish true 3D scene scanning from methods that capture multiple 2D images and stitch them together for a panoramic view or interpolate between them to estimate other views. Sets of 2D images do not capture the spatial information in a true 3D scan, nor do they permit viewing the 3D scene from arbitrary viewpoints, or with arbitrary choices of lighting and visualization conditions. The methodology proposed in this paper as part of our Virseum project captures museum exhibits (setting and artifacts) precisely. We use techniques that capture spatial geometry accurately (a laser range finder covering a full 360-degree scan in azimuth and 270 degrees in elevation), plus high-quality 2D images to capture the color and texture of polygonal surfaces in the scene (tied to the laser range finder data), and very high quality 2D images for capturing the texture and color of important object close-ups (paintings, sculptures, etc.).
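As a rough illustration of the kind of spatial data such a scan produces, the sketch below (Python with NumPy) converts a lattice of range readings indexed by azimuth and elevation into 3D Cartesian points. The angular resolution, array shapes, and the constant 5 m range are arbitrary assumptions for the example, not parameters from the paper.

```python
import numpy as np

def scan_to_points(ranges, az_deg, el_deg):
    """Convert a lattice of laser range readings to 3D Cartesian points.

    ranges : 2D array of distances in metres, indexed [elevation, azimuth]
    az_deg : 1D array of azimuth angles in degrees
    el_deg : 1D array of elevation angles in degrees
    Returns an (N, 3) array of x, y, z coordinates relative to the scanner.
    """
    az = np.radians(az_deg)[np.newaxis, :]   # shape (1, n_az)
    el = np.radians(el_deg)[:, np.newaxis]   # shape (n_el, 1)
    x = ranges * np.cos(el) * np.cos(az)
    y = ranges * np.cos(el) * np.sin(az)
    z = ranges * np.sin(el)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example: a coarse sweep spanning 360 degrees in azimuth and 270 in elevation.
az = np.arange(0.0, 360.0, 1.0)
el = np.arange(-90.0, 180.0, 1.0)
ranges = np.full((el.size, az.size), 5.0)    # stand-in: everything 5 m away
points = scan_to_points(ranges, az, el)
print(points.shape)                           # (97200, 3)
```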
A 3D spatial model of a scene may be constructed in several ways. The goal is to “produce a seamless, occlusion-free, geometric representation of the externally visible surfaces of an object”, or in the general case a collection of objects (Levoy 1997). Modeling a scene by abstracting objects as simple geometric surfaces (such as with a computer-aided design program) makes the representation of the scene simpler (fewer triangles describing surfaces). The tradeoff is that it is not as accurate (abstraction rather than measurement), and it is simplistic in appearance because of the simpler representation of surfaces and their textures. Examples include early work at creating models of historic sites, or the more simplistic special effects of early computer animation films.

More accurate and realistic models can be generated from sensor readings of a scene. These fall into two categories: passive sensing (camera-recorded images) and active sensing (spatial coordinates recorded by a laser range finder). A good discussion of active sensing versus passive sensing is given in Levoy (1997). Passive sensing requires reconstructing a scene by solving for scene illumination, sensor geometry, object geometry, and object reflectance, given multiple static 2D photographs taken of a scene. This continues to be a difficult problem to solve in computer vision, primarily because it requires accurately finding corresponding features (points) between the different images.
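The paper does not name a specific correspondence algorithm; as one hedged illustration of the feature-matching step that makes passive sensing difficult, the sketch below uses OpenCV's ORB detector to find and match candidate features between two overlapping photographs. The image file names and the choice of ORB are assumptions for the example only.

```python
import cv2

# Load two overlapping photographs of the scene (paths are placeholders).
img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors in each image.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force match descriptors; cross-checking keeps only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Corresponding point pairs, which a reconstruction pipeline would feed into
# camera pose and scene geometry estimation.
pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:200]]
print(len(pairs), "candidate correspondences")
```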
Active sensing devices such as laser range finders can be used to produce lattices of measurements of distance from the sensor location(s) to objects in the scene. The challenging part of this process is reducing the “clouds” of points measured by the multiple scans into a small enough number of polygons for real-time rendering. This is done by discarding redundant points from multiple scans, and by combining very small polygons into larger polygons when appropriate (e.g. large flat surfaces such as walls).
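The excerpt does not spell out how redundant points are discarded; as a minimal sketch of that reduction step, the snippet below collapses a merged point cloud onto a voxel grid and keeps one averaged point per occupied cell. The 5 cm cell size and the synthetic "wall" scans are assumptions for illustration only.

```python
import numpy as np

def voxel_downsample(points, cell=0.05):
    """Reduce a dense point cloud by keeping the centroid of each occupied
    voxel. `points` is an (N, 3) array in metres; `cell` is the voxel edge."""
    # Map every point to an integer voxel index.
    idx = np.floor(points / cell).astype(np.int64)
    # Group points that fall into the same voxel.
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    # Average the members of each voxel to get one representative point.
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, np.newaxis]

# Example: two overlapping scans of the same wall produce near-duplicate points.
scan_a = np.random.rand(100_000, 3) * [4.0, 3.0, 0.02]   # a 4 m x 3 m wall
scan_b = scan_a + np.random.normal(scale=0.002, size=scan_a.shape)
merged = np.vstack([scan_a, scan_b])
reduced = voxel_downsample(merged, cell=0.05)
print(merged.shape[0], "->", reduced.shape[0], "points")
```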

Smithsonian Latino Center said...

I am sending you the PDFs for some of this stuff; we need to use it in our documentation. I need to work on the outline of the methodologies we will be using.