A look into 1950s Norway.

In the interest of cleaning out my old room, my mother recently handed me a box of old stuff. Among the old magazines and books were four plastic boxes filled with reversal film slides dated 1950.

She claimed they belonged to the family – so I scanned all of them, curious to learn some new history.

But absolutely no one recognizes these people.

It’s a mystery. A rare look into 1950s Norway.

Tasting lingonberries.

Fishing with a net.

Just chilling with friends.

View from the cabin?

Getting ready for some mountain fish.

Enjoying candy? Lady in the background in her husband’s jacket.

Partying in style.

Cold ankles.

These people are celebrating the 17th of May.

The 17th of May was probably the only fun day for kids in 1950.

A serious boy.


Teacher blowing a kiss?

A family celebrating the 17th of May – one lady is wearing a bunad.

Cake, balloons and soda.

Just doing some shaving in the woods.

Notice the jacket – better keep it clean.

This lake is called Vannsjø – “Water Lake”.

Giving instruction by the sea.

Illustrating.

Cooperating.


Reciprocating.

Winning.

Dinner by the sea.

A couple.

Out to buy ice cream?

(17.03.19: I have been told that this is Svenner lighthouse)

Reaching the small harbor.
(17.03.19: I have been told that this is Nevlunghavn)

Moose traveling the sea by sunset.

the chili project – pass 1

It’s not easy being without internet, and I think I can illustrate the effect that several weeks without low-ping lolcats has on people of my generation (and demeanor).

I was prepared, in a way, for the lack of intellectual online nourishment.

The last thing I did before we went into the dark ages of mobile hotspots was to order something online.


What you see above is the contents of a chili seed “mystery pack”.

The way I approached the germination might have been slightly coloured by my work (as a molecular biologist).

OK. Then we wait.


Now weeks have passed – and amazingly the germlings and I survived!


Aji Fantasy (a C. baccatum) was getting too big for the box, so I decided to try to repot the healthiest-looking chilis.


Well, I had an assistant.


“Hi I will help you destroy everything”


At this point, soil will not be enough. These plants need “direct sunlight”, something rather hard to come by indoors in Norway.

Do not despair! Burn your chilis with the POWER OF A THOUSAND SUN- … well, 6000 K.

I am testing this guy (35W CFL 6400K E27).


This is the setup after repotting some of the plants. I’m hoping most of them will survive – they were much less inclined to stand upright than I hoped!

Another thing to consider is protecting against predators.

1. Inhibit the predator from attacking (see: glass bottle fence).


2. Convince the predator it should target something more convenient.


Now it begins.

How to turn your digital camera into an IR camera

Just before the weekend, I had an unfortunate experience with a microscope that lacked an IR filter. This was unfortunate because I had labeled some proteins with an IR-emitting antibody, which was, for all intents and purposes, rendered 100% invisible.

This made me think – could I make my camera take in all wavelengths of light, even the ones I can’t see myself?

Unsurprisingly, someone had thought of this before me. In “How to build your own IR camera”, Pieter Albertyn explains how he removed the IR filter from a cheap digital camera and replaced it with a hand-cut glass slide.

I threw on some pants and went to town, pestering my local electronics store for:

  • Tweezers
  • Some really small screwdrivers
  • Some glass (I wanted some cover glass, but had to settle for photo frame glass, which is much thicker)
  • A glass cutter

I picked out my old Panasonic Lumix DMC-T27 and prepared to wreck it. Amazingly, I found this gentleman to guide me through the process:

Test #1: Panasonic Lumix

Ideal for testing, since I had one handy, had found a guide for taking it apart, and had “borrowed” the camera from my father some years ago…

My father believes I ruin any electronic components I touch, so I braced myself.

Unscrew the camera and open the back cover. Notice that the LCD screen is attached with two cables:

If you like, you can detach these. They’re sturdier than they look. However, after picking the camera apart a couple of times, I noticed that you don’t really need to detach the cables to access the filter.

The thing between my fingers is the digital camera’s eyes (the CCD)! You can see I’ve detached the IR-filter that covered the CCD and placed it on the table. It’s a little blue piece of glass in a protective black gummy. Remove the IR-filter and put the gummy back in the camera. If you don’t put the protective gummy back, your camera will have trouble focusing. So put it back and reattach both cables.

A note regarding the guides I posted above: If you want to, you can replace the IR-filter with a new «filter» made of glass. As you can see, I managed to cut a piece of glass. However, it was much thicker than the IR-filter. I ended up dropping it, and found that the camera worked just as well without it. In fact, with this “filter” inserted, the camera could not focus.

Time to turn the Lumix on. What do we see?

In normal light, there is now a purple shine to everything. That would be the near-IR and IR light picked up by the camera!

Now for the exciting part: putting the IR-camera to the test.

Using a remote control with an IR light, this is the photo the IR-Lumix captured when I activated the remote control while aiming it at my face:

Disappointing! Or is it? Let’s have a look in Photoshop.
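(No Photoshop? A rough equivalent of this kind of levels adjustment can be done with ImageMagick – a minimal sketch, assuming the photo was saved as ir_photo.jpg:)

convert ir_photo.jpg -auto-level ir_photo_levels.jpg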

What about a dark glass of Pepsi Max?

Now this image is unedited. This is the way colours appear in a camera that lets in IR-light.

A very interesting detail: notice the lamp clearly visible through the black drink!

You might say that this is not a true IR camera. Indeed, the Lumix now detects IR light along with visible wavelengths. However, you can easily turn this camera into an IR-only camera by fitting an IR-pass filter that only passes IR light to the CCD.

(The reason this is not covered in the manual is that my local camera salesman seemed to suffer a nervous breakdown when I explained my project to him.)

All in all – test successful!

How to make sense of 23andme raw data

The title of this post is not honest. This approach will not teach you how to make sense of the 23andme raw data. What it will do, however, is supply you with the tools you need to do it.

This guide is written for Linux noobs, a level only slightly below my own. This approach works for me in a hobby setting; as such, I would welcome tips from seasoned bioinformaticians.

What this guide will not do is explain genetics or how to understand genetic variants. It is assumed that the reader is familiar with genetics and SNP data.

In order to use these steps, you will need to be able to navigate a Linux system by command line (it is certainly possible to do this through Linux emulation on Windows, but that is probably even messier).

For simplicity, this guide assumes that programs and files are located in the same folder.

The main steps are:

1. Download your raw data
2. Use 23andme2vcf to generate a .vcf file
3. Organize your data for Annovar input
4. Use Annovar to annotate the .vcf file and make a .csv file
5. Use Excel or a different analysis tool to navigate your data

I will go through these steps and supply the commands needed to do it.

Disclaimer: This guide is for educational and research purposes only. 23andme tests about 600-900k genetic variants. Some of these do have clinical associations – and some of them will be found to have clinical associations in the future. Consider this before looking at your raw data according to these steps. You might find associations that are more confusing (!) than the report generated automatically by Promethease. If you are new to this, or to personal genetics, I strongly suggest trying out a Promethease report first.

You can also test this on the Mendel family raw data.


1. Download your raw data from 23andme

Give it a name – for example mendel.txt.

It will look something like this:
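(In case the screenshot is hard to read: the file is plain text, tab-separated, with comment lines starting with #. The lines below are a sketch – the rsIDs and positions are illustrative, not anyone’s actual data:)

# rsid	chromosome	position	genotype
rs4477212	1	82154	AA
rs3094315	1	752566	AG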

Note that the human genome version is 37 (hg19/GRCh37). This is not the newest version of the human genome, but it is still the “standard” reference for most tools. Remember to always check that you are looking at build 37 when looking up variants or positions from 23andme online.

This text file is about 15MB (5MB compressed) and contains all the information on your SNPs, including rsIDs for the variants. Individual rsIDs can be looked up in dbSNP.


2. Use 23andme2vcf to generate a .vcf file

Download 23andme2vcf.

perl 23andme2vcf.pl mendel.txt mendel.vcf 4

You have now generated the file mendel.vcf.

Note: Indels are not supported by 23andme2vcf, which means we will lose these. The result is a file containing only single nucleotide variants.
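As a quick sanity check, you can count how many variants made it through the conversion (header lines in a .vcf start with #, so we count everything else):

grep -vc "^#" mendel.vcf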


3. Organize your data for Annovar input

Download Annovar, which is free for personal, academic and non-profit use.

I used mendel.vcf to generate an input file for Annovar called mendel.avinput.

The input should be organized in five columns:
| CHR | START | END | REF | ALT |
(chromosome number, start position, end position, reference allele and alternate (variant) allele).

Reference nucleotides can be entered as 0.

You can use LibreOffice to do this – remember to save as .avinput!
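If you would rather skip the spreadsheet, a minimal command-line sketch can build the five columns straight from the .vcf. This assumes the file contains only single nucleotide variants (which is what 23andme2vcf produces), so the start and end positions are identical:

grep -v "^#" mendel.vcf | awk -v OFS='\t' '{print $1, $2, $2, $4, $5}' > mendel.avinput

Annovar also ships its own converter (convert2annovar.pl), which is probably the more robust route.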


4. Use Annovar to annotate the .vcf file and make a .csv file

This is a very simple way to annotate; the Annovar guide gives much better advice on how to do this properly. You may add and remove whatever information you like – I find this approach works fine for my purposes:

perl table_annovar.pl mendel.avinput humandb/ -buildver hg19 -out mendel_annotated -remove -protocol refGene,cytoBand,genomicSuperDups,esp6500si_all,1000g2012apr_all,snp138,ljb23_all -operation g,r,r,f,f,f,f -nastring . -csvout

You should now have the file mendel_annotated.csv, an annotated version of most of your 23andme data.

Congratulations, you now have a file with information about your genetic variants (well, a couple of them!).

It should look something like this (some identifying information is covered by white-out in this image; also, some rows are colored grey because I found them uninteresting).


5. Use Excel or a different analysis tool to navigate your data

There exist many sophisticated programs for filtering these files to clear away the findings that don’t actually mean much.

Any spreadsheet program with a “filter” function should work. I would recommend Excel – simply because LibreOffice is really clunky to filter with.

For most practical hobby purposes, however, Excel works very well. For example, you can choose to look at only exonic variants, or only stop gains. You can also sort these by “pathogenicity”, that is, the severity of the effect the variant has on the encoded gene product (for example by sorting PolyPhen2 or SIFT values).
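For a quick peek without opening a spreadsheet at all, plain grep works. A sketch pulling out the rows that mention stop gains (the exact wording of the annotation depends on the protocols you chose in step 4):

grep -i "stopgain" mendel_annotated.csv > mendel_stopgains.csv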

Note: I have not figured out how to carry over genotype information (homozygosity/heterozygosity) to the .csv file. Annovar indicates that all columns after the first five will be kept as is, but I have not found this to be dependable. To check a genotype, either use 23andme.com or look in the .vcf file (for example, searching by position or rsID).
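For example, a single variant can be looked up in the .vcf by rsID like this (rs4988235 is just an illustrative ID – substitute the one you are interested in):

grep -w "rs4988235" mendel.vcf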

Protein Visualization and Virtual Reality

An essay written for the course Bioinformatics for Molecular Biology (MBV-INF4410, University of Oslo) in 2013, discussing the possibilities of protein visualization by VR, the Oculus Rift in particular. It includes useful links for people interested in the use of VR in molecular biology. You can download this text in .pdf format here.

Protein Visualization and Virtual Reality

Introduction

Since the 1950s, prediction of protein structures has been essential to furthering our understanding of molecular biology. In the 1970s, the first crystallographically solved protein structure was visualized on a computer (Beem et al., 1977), and advances in the protein modeling field have been led by bioinformatics ever since.

Schwede (2013) argues that protein modeling has gone through a paradigm shift during the last two decades. The trend has moved from few available structures to automatic generation of protein models for most amino acid sequences. Three-dimensional structures have been assigned for the majority of model organism proteins.

This shift coincides with another change – the parallel decrease in the cost of computer hardware and the leap in processing power and memory, making modeling and visualization of protein structures readily available to any motivated researcher.

At present, the main challenges in protein modeling are the correct interpretation of structures and enabling a realistic visualization of the structures themselves. In many cases, the final details of a polypeptide structure cannot be solved by algorithms alone, but require human intervention. This demands educated human intuition, and is greatly aided by comparative modeling. However, visualizing proteins in a manner allowing for comparison and exploration is challenging, and as yet not optimal.

The Cambridge Dictionary defines virtual reality as “a set of images and sounds produced by a computer, which seem to represent a place or a situation that a person can take part in”. In this text, I will explore how virtual reality technologies could be relevant to this challenge. I will use the CAVE and the Oculus Rift as examples of relevant VR technologies and compare them. As virtual reality is a vibrant technology under rapid development, I will limit my discussion mainly to its relevance for protein modeling, and explore how this technology is being applied at present and what applications it could potentially have in the future.

Visualization of Proteins

Protein function involves interactions and conformational changes. In order to understand these functions, researchers need satisfactory visual representation that can be explored dynamically.

While a variety of desktop and browser programs are available for examining three-dimensional structures, these have the disadvantage of inadequate depth perception and difficulty in manipulating molecular orientations, especially when the two-dimensional screen gets cluttered with several models (Anderson A, Weng Z, 1999). The programs also struggle to present transient protein interactions and intrinsically disordered proteins – situations where small changes make a particularly big difference.

Even in the early 1990s, developers saw better immersion as a solution to this problem. The argument was that 3D visualization would only be optimal if the researcher experienced the impression of being “in there” with the protein.

Virtual Reality Technology

The CAVE – A virtual reality environment

The earliest attempts at creating virtual reality environments involved isolating a part of the real world and, figuratively speaking, transforming it into a different one. This approach prevailed in scientific VR throughout the 1990s and 2000s – exemplified through the Cave automatic virtual environment (CAVE) system (Cruz-Neira et al, 1993). The name is fitting, as this VR is generated from a room with wall-fitted screens.

Immersion in the CAVE is created by motion capturing the position of the user in the room. Implementing a system where the user can use his or her whole body to interact was seen as an advantage over head-mounted «eyes only» systems. For bioinformatics, the scientist or student could walk around objects and crouch or perch to observe processes on different levels – a handy feature for models of molecular dynamics and membrane receptor binding (Cruz-Neira et al, 1993).

In 1993, the CAVE overcame the limitations of contemporary virtual reality technology. The image resolution was improved drastically in comparison with other systems, and the user was freer to move and interact with the virtual world than with the clunky and slow head mounts and hand-held controllers of the time (Cruz-Neira et al, 1993). Scientists reported previously undiscovered molecular features «popping out» as they explored the VR environment (Schulze et al, 2011).

However, the disadvantages of the CAVE were many. Memory and processing limitations hindered complicated rendering of molecular structure interactions. Most importantly, the CAVE was a costly setup (Cruz-Neira et al, 1993). While the hardware limitations are less relevant today, the economic aspects of the CAVE are still problematic. The StarCAVE wall, built in 2007, cost $1 million, while the less powerful NexCAVE setup cost “a few thousand dollars” (Schulze et al, 2011).

Data processing is problematic for virtual reality representations. In the CAVE, users reported annoying delays in the redrawing of molecular structures. Virtual realities also require a “fixed point” giving users the impression of where they are. If you are unsure of your position in the “world”, you will have difficulty steering, but also feel disoriented (Anderson A, Weng Z, 1999). This user issue could be particularly problematic for the CAVE, as the most common setup is to have a user stand upright in a room, not allowing for a “fixed point” in the representation (as opposed to the user sitting down and being fixed to a certain environment).

These user problems, in combination with the prohibitively high cost and complicated nature of the CAVE, explain why this setup is not common at universities.

The Oculus Rift – a stereoscopic display suitable for normal workstations

The composition of a virtual reality system remains unchanged from the early 1990s. The system at its most basic consists of three hardware modules: a display (usually with two screens, delivering stereoscopic output resulting in a 3D effect), a computer (generating the graphical output to the headset) and a tracker (Bryson, 1996). A tracker module integrated in the headset enables movement of the visual field as the user changes his or her head position.

In August 2012, Oculus VR created a crowdfunding campaign asking for donations to fund the development of a new virtual reality headset. $2.4 million was raised, and developer kits were first shipped out in late 2012 [1]. Weighing in at 369 grams, the headset is both light and comfortable to wear, in contrast to older head-mounted display technology [2].

The principle of the Oculus Rift is creating virtual worlds through stereoscopic 3D – the presentation of two images, one to each eye, through a single screen. The images are slightly warped, creating a three-dimensional effect. Movement is achieved by head tracking; by moving his or her head, the user has a realistic range of vision.

Software

Since its release, there has been immense development in game support for the Oculus Rift. While Oculus VR and its developers have focused on how this technology could be optimized for computer gaming, utilizing the Oculus Rift for non-gaming applications is also feasible. For example, DigiCortex is an interactive neural pathway visualizer under active development [3].

Virtual reality support has been developed for PyMOL by Virtalis [4], both for CAVE-like wall displays and head-mounted displays. The Virtalis software is a commercial product, not freely available to researchers or students. Developing PyMOL support for the Oculus Rift is challenging, as the newest versions of PyMOL are notably not open source (only the outdated builds are).

An alternative is viewing three-dimensional stereoscopic content modified for the Oculus Rift directly in your browser. Vr.js is a simple plugin that wraps multimedia content into an Oculus Rift format [5]. This javascript library has been successfully implemented for Google Street View. Vr.js also allows for stereoscopic 3D presentation of panorama images, 3D models and interactive environments. It is feasible to imagine this plugin integrated with other .js applications, such as protein visualizers.

More complex visualizers utilizing raytracing techniques are under development. In conventional three-dimensional graphics, scenes are rendered as shaded triangles, whereas raytracing computes a “light ray” for every pixel in the scene [6]. This makes the rendering intensely realistic compared to older techniques. These types of visualizations require somewhat stronger computers than a standard desktop, but with the low cost and rapid developments in CPU and memory technology, we can expect these applications to run on standard desktop computers in the near future.

Why Virtual Reality as a Bioinformatics Visualization Tool?

The technology for generating virtual realities has become both available and affordable, but a central question remains – is virtual reality technology useful in science, and if so, why?

The current molecular modeling programs are based on visualizing three-dimensional structures in a two-dimensional space. When manipulating and examining structures in this way, the researcher will often experience a «fuzzy effect», where the display gets cluttered with structures and a clear overview becomes difficult.

Even in the 1990s, researchers reported feeling that receptor docking simulations were improved by virtual reality visualization. Engaging a researcher’s motor and visual skills could generate an intuitive docking setting. Humans are naturally skilled at visual pattern recognition in a manner not (as of yet) replicable by computer algorithms, and engaging this skill is a potential tool for speeding up automatic searching. Conceivably, researchers in a virtual reality environment could choose to restrict a search to an interesting site determined by the researcher’s knowledge and interest, not by the automatic search function itself. This could be very useful in cases where, for example, a binding site is unknown (Anderson A, Weng Z, 1999).

While a complete virtual reality setting with full immersion could be a holy grail for protein visualization and experimental modeling in the future, there are inherent difficulties with virtual realities at present.

First of all, the performance of the virtual reality representation is limited by the processing power of the computer used to generate the outputs. At the moment, a standard desktop computer can easily run simple virtual environments, but higher level simulations will require a more sophisticated graphics card, memory and processing power.

Secondly, latency is an important concept in virtual reality. Simply explained, when you move your head in a virtual reality environment, the images around you must be redrawn. Latency describes how much time this process takes. A high latency causes the visual input to lag as you move your head to change your surroundings. Early virtual reality technologies suffered from high latencies, and the brain interprets lag to mean that the visual input is not realistic. A 5-15 ms latency would be ideal for totally seamless exploration of a virtual world, and the Oculus Rift is getting quite close by lowering the latency to well below 20 ms [7]. In comparison, the CAVE system has a 20 ms latency [8]. The Oculus Rift Consumer Edition is expected to have even lower latencies, and this is one of the reasons that the Oculus Rift might be better suited for molecular visualization than the CAVE.

Finally, two-dimensional interface paradigms could be problematic to implement in an immersive virtual reality. Input controls by mouse, keyboard or touchscreens are hard to implement if you are navigating a virtual reality through a head-mounted display. With modern virtual reality technology, these problems are decreasing rapidly, as evidenced by existing control functions such as the Razer Hydra [9] and LeapMotion [10]. Head tracking in itself allows for a certain degree of control of the environment.

As Kim et al. (2004) explain, a user wearing a head mount is not interacting with a two-dimensional screen display at all, but rather directly manipulating a model in three-dimensional virtual reality. This could, for example, be used to combine a ligand with a receptor, an example of docking modeling. In this way, the researcher could observe interactions at the active site from many directions, circle around them, and also focus on related sites of potential interest.

Another exciting possibility is the use of eye tracking – the Oculus Rift has the potential to track the user’s focus and activate functions by identifying which aspect of the environment is being looked at. It is interesting to imagine an interactive protein modeling tool with a HUD (head-up display) inside the visualization. In this thought experiment, the researcher could manipulate the program settings simply by navigating the virtual menu by eyesight. If the researcher could manipulate settings by visual input, manipulation of the protein itself could be carried out through motor input (hand tracking), such as selecting certain areas of a protein by hand and directing his or her gaze to change the coloration or representation of that selection alone. Such a setup would be both more intuitive and less time consuming than navigating a standard two-dimensional screen by keyboard and mouse. The thought might seem fantastical, but the technology for such a setup already exists and awaits implementation.

Conceptually, virtual reality allows researchers to use their own educated intuition to a larger degree than when using two-dimensional screens (Anderson A, Weng Z, 1999). Correct protein and ligand modeling (for example of amino acid side chains) requires human intuition and experience.

In virtual reality space, ligand design and docking could be performed by hands alone. As mentioned above, other controls and overlays could be implemented.

Discussion

In comparison to gaming and other types of visualization, scientific imaging stands in a unique position, often containing complicated three-dimensional data. Virtual reality offers a method of displaying this unambiguously. Exploring data in this manner allows for a much richer set of depth and spatial cues than would be possible with a two-dimensional screen.

An argument for using the CAVE system rather than head-mounted VR displays has been that using your whole body to navigate an actual «virtual room» is more realistic than the latter. However, while this argument might be valid for driving a car in a racing game, it might not apply to academic uses of VR. Bryson (1996) argues that virtual reality works better for science if it is not occupied with expressing a realistic world. A virtual reality for science can thus be oriented towards accurate rather than realistic representations. This allows for abstract experiments and investigations – such as protein interaction simulations in different environments – impossible in a “real” world. These investigations could themselves be of intrinsic value.

Another aspect of virtual reality in science is that a researcher would be able to explore regions not “computationally and mathematically” expected to be of interest (Bryson, 1996). In this sense, it can be argued that virtual reality and science make for a far better combination than virtual reality and gaming.

However, despite promising developments, virtual reality technology is challenging to implement. Most new users are not accustomed to wearing a head-mounted display, and neither the software nor the hardware is as yet sophisticated enough for serious academic use.

One might imagine that the affordable equipment and the open source traditions of bioinformatics would create a natural environment for active development. However, bioinformatics stands in a challenging position when it comes to attracting developers for open source applications (Stahl M.T., 2005). One would need to draw developers and visionaries from two quite specific pools of specialists – in the case of virtual reality for protein modeling, biologists and programmers. Virtual reality developers have primarily been interested in gaming, and perhaps do not easily see the value of applying their talents to an unfamiliar field such as biology.

As mentioned above, the Molecular Visualizer is to date the only application to model molecular structures for the Oculus Rift. As a prototype of future possibilities, this application is impressive in its detail and immersive effect. The developer of this application is currently implementing a set of control functions for protein visualization, inspired by programs such as PyMOL (Favreau, pers. comm., 28 November 2013).

Conclusion

In 1987, Box and Draper noted that «Essentially, all models are wrong, but some are useful».

When considering protein models, three-dimensional models are certainly useful, even on two-dimensional displays. However, so are virtual reality representations of these models. The key point is that points of view matter – in a virtual reality environment, new aspects of a model could be discovered.

Virtual reality might be especially interesting for visualizing topologies and interactions in real time. For example, Oculus Rift is currently integrated with Google Street View. One could imagine a similar setup for a three-dimensional view of cellular organization, or even walking around a genomic landscape. While computationally taxing, one could also envision a real time landscape with controls for pausing, speeding up – or even “terraforming” epigenetic elements.

At the moment, software for academic purposes is sparse, but several programs are in development. Favreau likens the usefulness of virtual reality in academic settings to music, suggesting that our brains are capable of understanding the contents of big data if it is presented in a way we can comprehend, just as we easily identify one song among thousands in seconds (pers. comm., 28 November 2013).

Suggesting virtual realities as the best way of visualizing protein structures might seem ambitious, but historically speaking the concept is not new to bioinformatics. In 1980, the following lines were written about stereoscopic images in the Teaching aids for macromolecular structures manual [11]:

“As you look at the stereoscopic views you will see beauty beyond any previous experience. We feel that the sense of beauty adds a force to the perception of macromolecular structure and function which will make it possible for you and your students to understand macromolecules as they really exist.”

34 years later, this idea still holds true, with a certain modification: it seems plausible that we – students and researchers alike – will not only understand, but interact with macromolecules as they really exist.

References

Articles and Books

Beem, K.M., Richardson, D.C., Rajagopalan, K.V. (1977) Metal sites of copper-zinc superoxide dismutase. Biochemistry, 16(9): 1930-1936.

Anderson, A., Weng, Z. (1999) VRDD: Applying virtual reality visualization to protein docking and design. Journal of Molecular Graphics and Modelling 17, 180–186.

Bryson, S. (1996) Virtual Reality in Scientific Visualization. Communications of the ACM, Vol. 39, No. 5.

Cruz-Neira, C., Leigh, J., Papka, M., Barnes, C., Cohen, S.M., Das, S., Engelmann, R., Hudson, R., Roy, T., Siegel, L., Vasilakis, C., DeFanti, T.A., Sandin, D.J. (1993) Scientists in Wonderland: A Report on Visualization Applications in the CAVE Virtual Reality Environment. IEEE Electronic Visualization Laboratory, Department of Electrical Engineering and Computer Science, University of Illinois at Chicago, Chicago, IL 60680.

Kim, J., Park, S., Lee, J., Choi, Y., Jung, S. (2004) Development of a Gesture-Based Molecular Visualization Tool Based on Virtual Reality for Molecular Docking. Bull. Korean Chem. Soc., Vol. 25, No. 10.

Schulze, J.P., Kim, H.S., Weber, P., Prudhomme, A., Bohn, R.E., Seracini, M., DeFanti, T.A. (2011) Advanced Applications of Virtual Reality. Advances in Computers, Vol. 82. Burlington: Academic Press, pp. 217-260.

Schwede, T. (2013) Protein modeling: what happened to the “protein structure gap”? Structure, 21(9): 1531-1540.

Stahl, M.T. (2005) Open-source software: not quite endsville. Drug Discovery Today, Volume 10, Issue 3, pp. 219–222.


Webpages

1. Kickstarter 2012, viewed 5. December 2013,

            <http://www.kickstarter.com/projects/1523379957/oculus-rift-step-into-the-game>

2. Oculus VR 2013, viewed 5. December 2013

            <https://developer.oculusvr.com/wiki/HardwareSpecs>

3. Digicortex, viewed 5. December 2013

            <http://www.digicortex.net/>

4. Virtalis, viewed 5. December 2013

            <http://www.virtalis.com/academic-rad.php>

5. Ben Vanik, GitHub, viewed 5. December 2013

            <https://github.com/benvanik>

6. Hindriksen, V., Scientific Visualisation of Molecules, StreamComputing, viewed 5. December 2013

            <http://streamcomputing.eu/blog/2012-10-31/scientific-visualisation-of-molecules/>

7. LaValle, S., The Latent Power of Prediction, Oculus VR, viewed 5. December 2013

            <http://www.oculusvr.com/blog/the-latent-power-of-prediction/>

8. Vepo Lab, The Cave, viewed 5. December 2013

            <http://chpsw.temple.edu/chpsw/vepolab/index_files/Page588.htm>

9. Razer Hydra, PC Gaming Motion Sensing Controllers, viewed 5. December 2013

            <http://www.razerzone.com/gaming-controllers/razer-hydra/>

10. LeapMotion, The Leap Motion Controller, viewed 5. December 2013

            <https://www.leapmotion.com/>

11. Martz, E., Francoeur, E., History of Visualization of Biological Macromolecules, viewed 5. December 2013

            <http://www.umass.edu/microbio/rasmol/history.htm>