High-end graphics workstations (such as the Silicon Graphics RealityEngine) are typically optimized
for the display of textured surfaces, while low-end workstations (such as the Silicon Graphics Indy)
are typically optimized for the display of untextured surfaces. Given these capabilities, the most
obvious way to partition rendering between a high-end server and a low-end client is to omit surface
texture on the client. To demonstrate this, we consider a room composed of flat surfaces that exhibit
smooth shading and texture (see figure 2). The model contains 1131 polygons with a fixed color at each
vertex. This color was calculated using a hierarchical radiosity algorithm that approximates the
diffuse interreflection among textured surfaces. The high-quality rendering (figure 2a) employs
antialiasing, Gouraud-interpolated shading, and texturing. The low-quality rendering (figure 2b)
employs antialiasing and Gouraud-interpolated shading but no texturing. The difference between the two
renderings is shown in figure 2c.
Figures 2d through 2g show image-based compression of the high-quality rendering using varying JPEG
quality factors. Figures 2h through 2k show polygon-assisted compression using quality factors selected
to match as closely as possible the code sizes in figures 2d through 2g, assuming that the geometric model
resides on both machines. The quality factors, code sizes, and compression rates are given below each
image. Figures 2l and 2m give one more pair, enlarged so that details may be seen.
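The sweep over quality factors in figures 2d through 2g can be illustrated with a short sketch: encode one frame at several JPEG quality factors and record the resulting code sizes. This is an assumption-laden stand-in, not the paper's pipeline: Pillow's `quality` parameter plays the role of the JPEG quality factor, and a synthetic shaded-plus-textured image plays the role of a rendered frame.

```python
import io

import numpy as np
from PIL import Image

# Synthetic stand-in for a rendered frame: smooth shading plus
# texture-like noise, the mix of features discussed in the text.
rng = np.random.default_rng(0)
shading = np.tile(np.linspace(0, 235, 256), (256, 1))
texture = rng.normal(0.0, 10.0, (256, 256))
pixels = np.clip(shading + texture, 0, 255).astype(np.uint8)
frame = Image.fromarray(np.stack([pixels] * 3, axis=-1))

code_sizes = {}
for q in (25, 50, 75, 90):
    buf = io.BytesIO()
    frame.save(buf, format="JPEG", quality=q)
    code_sizes[q] = buf.tell()  # code size in bytes at quality factor q

# Higher quality factors spend more bits on the same frame.
assert code_sizes[25] < code_sizes[50] < code_sizes[75] < code_sizes[90]
```

Matching code sizes between the two methods, as in figures 2h through 2k, amounts to searching this quality-factor axis for the closest byte count.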
Table I: Image-based compression versus polygon-assisted compression, compared for the scenes pictured in
figures 2 and 3. In D, we select a quality factor that gives 20 frames per second while requiring 2 Mb/s or
less of network bandwidth. In E, we select a quality factor that matches as closely as possible the code
size obtained in D, assuming that the low-quality geometric model resides on both client and server. In F,
we estimate the number of bytes required to generate D assuming that a losslessly compressed
low-quality model is transmitted from server to client.
In every case, polygon-assisted compression is superior to image-based compression. There are two distinct
reasons for this:
1. The polygon-assisted rendering contains undegraded edges and smoothly shaded areas – precisely those
features that fare poorly in JPEG compression.
2. The difference image contains less information than the high-quality rendering, so it can be compressed
using a higher JPEG quality factor without increasing code size – even higher than is required to compensate
for the division by 2 in the difference image representation. Thus, texture features, which are present only
in the difference image, fare better using our method.
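The division by 2 mentioned in reason 2 can be made concrete with a minimal sketch of the difference-image representation: the signed difference between the two renderings is halved and biased so it fits an ordinary 8-bit image that JPEG can carry, and the client inverts the transform against its own low-quality rendering. Variable names and the exact bias are assumptions for illustration; the lossy JPEG step the difference image would pass through is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for the two renderings, as 8-bit gray values.
high = rng.integers(0, 256, (64, 64)).astype(np.int16)  # server's textured rendering
low = rng.integers(0, 256, (64, 64)).astype(np.int16)   # client's untextured rendering

# Encode: the signed difference spans [-255, 255]; halving and biasing
# maps it into [0, 255] so it fits an 8-bit image suitable for JPEG.
diff = ((high - low) // 2 + 128).astype(np.uint8)

# Decode (lossless path shown; in practice `diff` passes through JPEG):
recon = np.clip(low + 2 * (diff.astype(np.int16) - 128), 0, 255)

# The division by 2 costs at most one gray level per pixel.
assert np.abs(recon - high).max() <= 1
```

Because the difference image holds only the residual texture detail, the JPEG quality factor spent on it can be raised enough to more than offset this one-level quantization cost, which is the trade-off the text describes.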
As an alternative to comparing images at matching code sizes, we can compare the code sizes of images of
equal quality. Unfortunately, such comparisons are difficult because the degradations of the two methods
are different – polygon-assisted compression always produces perfect edges and smooth shading, while JPEG
never does. If one allows that figure 2j, generated using polygon-assisted compression, is comparable in
quality to figure 2d, generated using image-based compression, then our method gives an additional 3x
compression for this scene.
Table I also estimates the number of bytes required to generate figure 2m if the model is transmitted from
server to client using the proposed lossless compression method. This size (13529 bytes) lies between the
code sizes of figures 2e and 2f. Even in this case, polygon-assisted compression is superior in image quality
to image-based compression, both in terms of its edges and smooth shading and in terms of the JPEG quality
factor used to transmit the texture information.