VR Vocabulary

If the military taught me one thing above all else, it's that everyone needs to be on the same page.  As a communications engineer and photographer, it was a double whammy.  Photography, especially panoramas, seems to be as diversified as any other discipline out there.  Just get online (Firefox, I hope) and look up some of the definitions in panoramic photography and virtual tours.  The list below is far from the final say on what is what, but it is how I understand what I do when it comes to VR photography.


Per Wikipedia, an Application Programming Interface (API) key is a unique identifier used to authenticate a user, developer, or calling program to an API.  However, they are typically used to authenticate a project with the API rather than a human user.

APIs also enable companies to open up their applications’ data and functionality to external third-party developers.

See below for additional API and MapBox information.

The size of a photo is determined by the camera.  My Canon 5D Mark IV is rated at 30.4 megapixels (MP) when shooting in-camera RAW format.  To calm the purists out there, the actual sensor size is 31.7 MP, and the difference between actual and effective pixels is beyond the scope for now.  So when you hear someone talk about camera pixels, they should be talking about the effective camera pixels.
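The megapixel arithmetic itself is simple, as this sketch shows (the 6720 x 4480 dimensions below are the 5D Mark IV's approximate full-resolution output, used here purely for illustration):

```python
def megapixels(width_px: int, height_px: int) -> float:
    """Effective megapixels from an image's pixel dimensions."""
    return width_px * height_px / 1_000_000

# Approximate full-resolution output of a 5D Mark IV (illustrative numbers)
print(round(megapixels(6720, 4480), 1))  # → 30.1, close to the rated 30.4 MP
```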

As with most pro-level DSLRs, there are also settings for shooting the JPEG file format (which I rarely, if ever, use anymore).  Often you can shoot RAW and JPEG images simultaneously.  JPEG image sizes are selectable and will vary from DSLR model to model.  My Canon has three RAW and eight JPEG formats to choose from, but as I stated, RAW is preferable.

If you take anything away from this, remember that a camera’s pixel count has nothing to do with the quality of the camera.  Pixels determine how large an image can be printed.  At 30.4 MP I’m able to produce an A2 print (16.5 x 23.4 in) with great quality.

If you want to know more about camera resolution, “Camera Resolution Explained” is a good article.

Dynamic range defines a camera’s ability to measure, or see, the light intensity range from the shadows to the highlights.  If you are in a low-light scene (low-key) the dynamic range is very small.  In bright-light (high-key) scenes the dynamic range is much higher and quite likely outside the range of the camera.  Keep in mind that there is a dynamic range for the camera and a dynamic range for the subject.

Cameras and different sensors will have higher or lower ranges but as long as the range of the subject doesn’t exceed the range of the camera you will get a perfectly exposed image.

Look at it this way: a stop of light is either twice or half the light being measured.  From f/2.8 to f/4 is one stop down, or half the light – the aperture is getting smaller.  From f/5.6 to f/4 is one stop up, or twice the light – the aperture is getting larger.  You can look up the rest.
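The stop arithmetic above is easy to verify in a few lines of Python (for f-numbers, the light gathered varies with the inverse square of the number, so each full stop multiplies the f-number by roughly 1.4):

```python
import math

def stops_between(f_from: float, f_to: float) -> float:
    """Exposure change in stops when moving from one f-number to another.
    Positive means more light (the aperture is opening up)."""
    return 2 * math.log2(f_from / f_to)

print(round(stops_between(2.8, 4), 1))  # f/2.8 -> f/4: about -1 stop (half the light)
print(round(stops_between(5.6, 4), 1))  # f/5.6 -> f/4: about +1 stop (twice the light)
```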


The human eye is said to have a range of around 24 stops of light, but the eye is dynamic and is changing its aperture all the time.  If you look at the eye as a camera and fix it at any instantaneous point, the range is around 10-14 stops of light.  A point-n-shoot camera’s range is around 5-7 stops of light, but a good DSLR can have a range of around 8-11 stops and is quite close to the human eye.  But all this only applies to normal daylight.  During evening hours looking out at the stars, the human eye can achieve even higher instantaneous ranges.

Any basic two-dimensional image – a simple photograph.

A full panorama is defined as an image that covers 360 degrees along the horizon and 180 degrees up and down; special viewers are required to view these images.

For our purposes, there are three types of full panoramas: equirectangular, cubic, and “little planet”.

Equirectangular projections are made up of multiple images (a minimum of 6 with a fish-eye lens) and have no inherent field-of-view limit, although most software caps out at 360 x 180 degrees, which makes this projection the choice for full-spherical panoramas.  Similar to cylindrical projections, only the vertical lines and the horizon are visually straight.  Everything else is observed as curved.  Most interactive panorama viewers today use equirectangular or spherical projections.
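A minimal sketch of how a viewing direction lands on an equirectangular image may help; the convention below (yaw 0 / pitch 0 at the image center) is one common choice, and individual viewers may differ:

```python
def equirect_pixel(yaw_deg: float, pitch_deg: float,
                   width: int, height: int) -> tuple[float, float]:
    """Map a viewing direction (yaw -180..180, pitch -90..90, degrees)
    to pixel coordinates in a 360 x 180 equirectangular image."""
    x = (yaw_deg / 360.0 + 0.5) * width   # yaw sweeps left-right across the image
    y = (0.5 - pitch_deg / 180.0) * height  # pitch sweeps top (zenith) to bottom (nadir)
    return x, y

print(equirect_pixel(0, 0, 8000, 4000))   # straight ahead -> image center (4000.0, 2000.0)
print(equirect_pixel(0, 90, 8000, 4000))  # zenith -> top edge (4000.0, 0.0)
```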

A Cubic projection maps a portion, or the entire sphere, to flat images.  The images are arranged in the form of a cube and viewed from its center.

Because the cubic projection consists of six sides (front, back, left, right, nadir, and zenith), each face has a 90 x 90-degree field of view.  This means that each cube face will have straight lines and will be much easier to edit in post-processing.
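Deciding which of the six faces a viewing direction belongs to comes down to its dominant axis.  A sketch (the axis and face-name conventions here are mine for illustration; viewers differ):

```python
def cube_face(x: float, y: float, z: float) -> str:
    """Pick the cube face for a direction vector.
    Illustrative convention: +x right, -x left, +y zenith, -y nadir,
    +z back, -z front."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:          # x dominates: left/right faces
        return "right" if x > 0 else "left"
    if ay >= az:                        # y dominates: up/down faces
        return "zenith" if y > 0 else "nadir"
    return "back" if z > 0 else "front" # z dominates: front/back faces

print(cube_face(0, 0, -1))  # looking straight ahead -> "front"
print(cube_face(0, 1, 0))   # looking straight up -> "zenith"
```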

Images in the cubic projection are commonly used as the image source by several spherical panorama viewers, including SPi-V and QuickTime.

Little planets, or stereographic projections, are nothing more than a full equirectangular spherical panorama, including the nadir and zenith images, mapped to a fish-eye-like image – like projecting the sphere from one of its poles onto a flat surface.  The projection is from the zenith looking out.  You can also create a “tunnel view” by rotating the pitch -90 degrees, but that’s beyond what we need to get into right now.
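The radius math behind the little-planet look can be sketched as follows (a stereographic projection from the zenith: the nadir lands at the center, the horizon on a circle around it, and the zenith is pushed out toward infinity):

```python
import math

def little_planet_radius(pitch_deg: float) -> float:
    """Radial distance (in unit-sphere units) of a point at the given
    pitch (-90 = nadir, +90 = zenith) under a stereographic projection
    taken from the zenith."""
    angle_from_nadir = math.radians(pitch_deg + 90.0)
    return 2.0 * math.tan(angle_from_nadir / 2.0)

print(little_planet_radius(-90))  # nadir -> 0.0 (center of the little planet)
print(little_planet_radius(0))    # horizon -> 2.0 (the planet's "surface" ring)
```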

A gyroscope is a device that is used to maintain a reference for direction and/or provide stability.  You’ll find them in ships, spacecraft, and most smartphones.  In smartphones, they sense angular rotational velocity and acceleration.  A gyroscope is required for a smartphone to display 360 videos or photographs: the tour or photo moves when we move our phone/tablet.

To keep it simple, High Dynamic Range photography is no more than taking multiple ‘bracketed’ photos at different exposure levels, normally by adjusting the shutter speed, and combining (“stacking”) them into a single image in software, extending the overall dynamic range.  The result is an image with a greater exposure range than a single image can produce.  (Also see Dynamic Range.)

The best results are achieved by using a sturdy tripod, mirror lock-up, and a shutter release mechanism to ensure “tack sharp” images.
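A toy sketch of the stacking idea, assuming NumPy: weight each bracketed exposure by how well-exposed each pixel is (close to mid-gray), then blend.  Real HDR software does far more than this (frame alignment, camera-response recovery, tone mapping), so treat this only as an illustration of the principle:

```python
import numpy as np

def naive_exposure_fusion(exposures: list) -> np.ndarray:
    """Blend bracketed exposures (float images, values 0..1) by favoring
    well-exposed (near mid-gray) pixels in each frame."""
    stack = np.stack(exposures)                            # shape (n, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))  # peak at 0.5
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

dark = np.full((2, 2), 0.1)    # underexposed frame
mid = np.full((2, 2), 0.5)     # well-exposed frame
bright = np.full((2, 2), 0.9)  # overexposed frame
fused = naive_exposure_fusion([dark, mid, bright])
# fused values sit at 0.5 here: the well-exposed frame dominates the blend
```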

A node refers to a single panorama.  A node can range from 0° to 360° (left to right) and/or 0° to 180° (zenith to nadir).  More than one node linked to other nodes creates a virtual tour.

Parallax is the reason we see in 3D: our two eyes view an object from slightly different positions.  An object’s actual position doesn’t change, but it seems to shift when we close one eye and then the other.

It occurs in panoramic photography if the camera and lens are not rotated around the entrance pupil of the lens – not the nodal point, as it is often mislabeled.  You can see it in the overlap when two adjacent images are viewed together.

Correcting parallax is a bit complicated and impossible to do professionally without the right equipment and a calibration procedure.  Also, keep in mind that any calibration will vary by camera and lens configuration and by the focal length of the lens (the entrance pupil shifts at different focal lengths).

A patch, or patching, is a method of extracting and modifying a specific area of a panorama (non-destructively).  For our tours, that means bringing the patch into Photoshop, editing, and returning it to the panorama.  It is often used to replace or correct the nadir image – removing a tripod, for example.

There are several ways to display partial panoramas – panoramas that don’t fill the entire sphere (either along the horizon or in the up-and-down perspective).

The difference from full panoramas is that partials can be displayed flat as long as they don’t exceed 120 degrees along the shorter side.  The more common types are cylindrical, rectilinear, and partial spherical.

Cylindrical panoramas are the result of projecting the scene onto a cylinder and are often used in landscape photography (they are not well suited for architectural photography).  They can have a horizontal view of up to 360 degrees.  Vertically, the projection allows up to around 180 degrees, with a practical limit of about 120 degrees.

Rectilinear panoramas project the subject the way an ordinary lens would.  Straight lines stay straight because the horizontal and vertical fields are limited to 120 degrees, which makes them great for architectural images.  Beyond 120 degrees either way they begin to distort, especially at the corners.

Partial spherical panoramas are the same as full panoramas except that the nadir and zenith views are excluded from the field of view.

A picture element (aka pixel) is the smallest block used in creating an image on a screen.  Pixels are square, individually undetectable by the human eye, and arranged on a grid, with each having its own color.  And for the smarter ones out there, yes, they are made up of red, green, and blue (RGB) sub-pixels, which is beyond the scope of what we need to know here.

Pixel count, or image dimension, is the number of pixels across the length and width of a digital image.

If you have Photoshop or any other photo software and you zoom in to 1000% or more, you’ll see what I’m talking about – this is also called “pixel peeping”, often the habit of an overzealous critic of photography.

(See digital image size next.)

In combination with a tripod, a panorama head provides photographers with the means to shoot images around the entrance pupil of a lens to produce either high-resolution (gigapixel) or 360° equirectangular spherical panoramic images.

  • Spherical 360° heads that manually enable 360° panoramas or gigapixel images
  • Robotic heads, which are precise, automatic, software-driven pano heads

Determining and calibrating the “entrance pupil” is required for each camera/lens-focal length combination.

Screen resolution is not the same as digital image resolution, even though both are measured in pixels.  My BenQ monitor has a fixed resolution of 2560 x 1440 at 109 PPI.  Higher-resolution monitors have smaller, more tightly packed pixels, and images on these monitors will be sharper than the same image on a lower-resolution monitor – up to a point.  The age of your monitor, your eyes, and your distance from the monitor will also affect how sharp an image looks.

Because the pixel count is fixed, the resolution setting of a photo has no impact on how that photo looks on your monitor.  A 72 PPI or 2500 PPI setting, when exported, will look the same.  It is the pixel dimensions of the photo (length and width) that determine how the image will look on any given display screen.

Now, this is where the confusion starts.  DPI and PPI are not interchangeable!  DPI is a physical property of a printer and has nothing to do with digital images.

I can’t count how many people have come up to me with a fantastic image they got on their cell phone from a friend and wanted a print made.  When I look at the file after an email service has resized the image, it looks great on screen at, say, 1080 x 640 pixels @ 72 PPI (a pretty standard size for viewing on screens, websites, etc.).

When I try to print this image at anywhere from 180 to 300 PPI, there’s a problem.  I try to avoid photography math as much as possible, but at the lower print quality of 180 PPI the math is as follows: 1080 pixels / 180 PPI by 640 pixels / 180 PPI gives an image size of about 6” x 3.6”.  Not quite the 8” x 10” they were expecting.

You can verify this in Photoshop: go to Image Size, uncheck the Resample box, and change the resolution.  You’ll see that the pixel dimensions and file size of the original do not change, but the height and width of the print get smaller or larger depending on the resolution.  In a nutshell:

  • A smaller PPI value equals a larger, lower-resolution print
  • A larger PPI value equals a smaller, higher-resolution print
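The cell-phone-print example above can be sketched in a couple of lines of Python:

```python
def print_size_inches(width_px: int, height_px: int, ppi: float) -> tuple[float, float]:
    """Physical print dimensions for a given pixel count and print resolution."""
    return width_px / ppi, height_px / ppi

w, h = print_size_inches(1080, 640, 180)
print(round(w, 1), round(h, 1))           # 6.0 3.6 - far from the hoped-for 8" x 10"
print(print_size_inches(1080, 640, 300))  # even smaller at the higher print quality
```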

This is why printing is considered an art form all by itself.

Responsive panoramas, simply put, detect what device is viewing the panorama, then resize the content to accommodate the screen size of that device.

When I’m building a panorama, it’s all done on my desktop with a large monitor.  Great internet speed, a fast computer, and a high-end video card provide a great experience.  Most of the images and other content are set up for viewing on a screen (1080 pixels @ 72 PPI).  But on a cell phone or some tablets the panorama may take some time to load; add in a slow hot-spot or WiFi connection and it will take too long, for most people, to load up.

To compensate, smaller images and content elements are resized automatically for tablets and cell phones.  This speeds up the loading process and gives a better viewing experience.  The downside: it increases the output file size, whether you’re storing the files yourself or on my servers.


A skin is a collection of text, graphical, and control elements overlaid on an image (or surrounding an image).  A controller skin, for example, includes graphical elements that let you manually control the output project, making it virtually interactive.

After you have stitched a 360° simple/spherical panorama or taken a spherical photo, making sense of it can be challenging.  To make use of these images you need software to convert the photos – in our case, to an HTML5 format – to view the panoramas.

Once a tour is finished, panoramas are exported in an HTML5 format to an output folder.  The folder contains the images, videos, music, and everything needed to view the project, including an index file that is used to execute the tour.  The output folder can be uploaded to your web server, or, if you’re a WordPress client, a handy plugin is available to post directly to your site.

Virtual reality (VR) tours are a simulation of any space or area captured by still images, stitched, and converted into 360° x 180° nodes.  Music, video or photo popups, external links, restaurant menus, employee photos, and much more can be integrated into any node.

Panoramas imply an unbroken view.  A tour can be a single node or can contain multiple nodes tied together via photo hotspots into a multiple-node tour.  There’s no limit to the number of nodes.  The downside: multiple nodes create very large output files.
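A multi-node tour is essentially a graph: each node is a panorama, and the hotspots are the edges linking nodes together.  A minimal sketch (the node names and structure here are hypothetical):

```python
# A hypothetical three-node tour: each node lists the nodes its hotspots link to.
tour = {
    "lobby": ["hallway"],
    "hallway": ["lobby", "dining_room"],
    "dining_room": ["hallway"],
}

def reachable(tour: dict, start: str) -> set:
    """All nodes a visitor can reach from a starting panorama."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(tour.get(node, []))
    return seen

print(sorted(reachable(tour, "lobby")))  # every node is linked into the tour
```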

With special software editors, customized panoramas can be created by adding panorama control buttons, site maps, street maps, or satellite maps.  VR tour content possibilities are limitless.

WebVR has been replaced by the WebXR Device API, which has wider support, more features, better performance, and supports both VR and AR.  So for our purposes, we will deal with WebXR only.

WebXR enables you to create an immersive experience directly from a web page without the need for an app.  Most of the current VR headsets can display WebXR content.

WebXR, as of 2019, is the evolution of WebVR into both virtual and augmented realities, making it easier to create immersive, interactive environments, VR tools, and much more.

Browser support is always important to emerging technology.  Chrome is currently leading this evolution, and Firefox is making experimental technology available, though it is subject to change.  We can hope that multiple platforms will be available soon.

A panorama is any wide-angle image of a real or imagined three-dimensional space that is projected (mapped) to a two-dimensional surface – for our purposes, a print or any monitor or screen.  That’s where the simple stops; wide-angle can take on a whole new meaning in photography.

Equipment-wise, there are wide-angle lenses, fish-eye lenses, and panoramic cameras.  Or you can take a conventional ‘kit’ or any other non-wide-angle lens, shoot an image, move the camera a few degrees (generally clockwise) in the same horizontal plane, shoot another photo, and keep shooting until you get back to where you started – you now have a single-row simple panorama.  You can include a cell phone anywhere in there if you need to.

Now if you have a good tripod and a ‘pano-head’, move the camera up or down a few degrees and shoot another 360°, and you’ll have a multi-row panorama.  This and the simple panorama above are referred to as high-resolution or gigapixel images/panoramas.

Embedded Maps

I’m not sure if anyone noticed, but I don’t support “big tech” or other third-party map products.  At this level, I’ve found that MapBox is a better solution.

Consider that you are a single business and would like your website to stand out from the crowd.  That’s highly unlikely if you’re paying for an expensive “pin” on a street or satellite view with hundreds of other businesses.  If you decide to add additional map pins, then you’ll need to abide by the minimum guidelines of how far apart those pins can be, how many will be required, etc.  You get the point: it is cost-driven – each pin hit is money you pay.

Right now a MapBox API allows up to 50,000 free map views per month and charges $0.50 per 1,000 hits after that.  The other guys went from $0.50 per 1,000 loads in 2019 to $7.00 per 1,000 loads.  Something to consider.
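Using the numbers quoted above (which will certainly change over time), the monthly-cost comparison works out like this:

```python
def monthly_map_cost(views: int, free_views: int = 50_000,
                     rate_per_1000: float = 0.50) -> float:
    """Monthly cost for map views beyond a free tier, at a per-1,000 rate.
    Defaults reflect the MapBox figures quoted in the text."""
    billable = max(0, views - free_views)
    return billable / 1000 * rate_per_1000

print(monthly_map_cost(50_000))           # 0.0 - still inside the free tier
print(monthly_map_cost(60_000))           # 5.0 at $0.50 per 1,000 extra views
print(monthly_map_cost(60_000, 0, 7.00))  # 420.0 if every load cost $7.00 per 1,000
```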


2022 A New Spring