The Growth of Green Building Over the Years


Green building, a concept introduced only a few decades ago, has grown immensely in recent years. The need to develop environmentally friendly habits in a time of high energy and material consumption has encouraged governments to create policies mandating green building practices. As a result of this effort, green buildings and innovative green ideas have multiplied rapidly.

The first state to set green building requirements for new public buildings was Washington, in 2005. A bill signed by former Governor Christine Gregoire requires all new major public facilities exceeding 5,000 square feet to meet LEED standards. New York has been another major advocate of green building. Former governor George Pataki signed Executive Order No. 111, which established energy standards for buildings and has been built upon over the years. As in Washington, all new public buildings in New York are consequently required to meet LEED standards. The order also required state agencies to cut their buildings' energy consumption to 35% below 1990 levels by 2010, and state agencies in New York must buy only ENERGY STAR equipment when purchasing new equipment (Erpenbeck & Schiman, 2010). The progress of green building can additionally be measured by observing the growth of LEED, the largest green building rating system in the United States: by 2007, the number of buildings using the LEED assessment system had grown from 0 to 3,000, and the number of certified green buildings from 0 to 300. Table 1 further illustrates green building's growth in the United States from 1999 to 2005 using LEED data (Kibert & Grosskopf, 2007). In terms of area, the United States Green Building Council reported 667,600 square feet of green-certified space in 2000, a figure that grew to 500 million square feet by 2010 (Kontokosta, 2011).


Table 1: The growth of green building from 1999 to 2005 in terms of LEED metrics (Kibert & Grosskopf, 2007)

As a new concept, green building has many areas in which it can be improved. One of the major goals in sustainable building is to create structures that generate the energy they need on site. Known as Net Zero Energy Buildings (NZEBs), these buildings produce as much energy as they consume over the course of a year. Although some buildings have been designed as net zero energy, none have fully met their intended levels of savings. To improve on the concept of Net Zero Energy Buildings, it is essential to identify the strengths and weaknesses of current designs. The U.S. Department of Energy (DOE) recommends identifying their benefits through their utility bills, a simple energy-balance calculation sketched below. Additional research on energy storage is also necessary to implement the idea at the scale of large buildings. By 2025, the DOE hopes to have developed an effective method for producing cost-effective net zero energy buildings (Pless & Torcellini, 2009). Furthermore, to improve the incorporation of green building ideas, it is important to educate everyone involved, from the engineer to the occupant. With greater knowledge of green building, engineers will be more likely to create effective, environmentally friendly designs, and it is recommended that engineers always discuss sustainable methods in design review meetings. Occupants of sustainable buildings should be taught how to use and care for green technologies in order to use them to their fullest potential (Berardi, 2012).
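The utility-bill check the DOE recommends amounts to simple annual bookkeeping. The short Python sketch below illustrates the idea; all of the monthly figures are invented for illustration, not measured data, and real verification would use metered on-site generation and consumption over a full year.

```python
# Hedged sketch: judging a building's net zero status from a year of
# (hypothetical) utility data. A building is net zero if it generates at
# least as much energy as it consumes over the year.
monthly_consumed_kwh = [900, 850, 800, 700, 650, 600, 620, 640, 700, 780, 860, 900]
monthly_generated_kwh = [500, 600, 750, 850, 950, 1000, 1020, 980, 880, 720, 560, 480]

net_kwh = sum(monthly_generated_kwh) - sum(monthly_consumed_kwh)
status = "meets the net zero target" if net_kwh >= 0 else "falls short of net zero"
print(f"Annual net energy: {net_kwh} kWh -> {status}")
```

Here the example building generates 290 kWh more than it consumes over the year, so it would qualify; a deficit in any single month is acceptable as long as the annual balance is non-negative.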

 

References

Berardi, U. (2012). Sustainability assessment in the construction sector: Rating systems and rated buildings. Sustainable Development, 20(6), 411-424.

Erpenbeck, M., & Schiman, C. (2010). Environmental law: The past, present, and future of green building. GPSolo, 27(2), 34-46. Retrieved from http://www.jstor.org/stable/23630127

Kibert, C., & Grosskopf, K. (2007). Envisioning next-generation green buildings. Journal of Land Use & Environmental Law, 23(1), 145-160. Retrieved from http://www.jstor.org/stable/42842944

Kontokosta, C. (2011). Greening the regulatory landscape: The spatial and temporal diffusion of green building policies in US cities. Journal of Sustainable Real Estate, 3(1), 68-90.

Pless, S., & Torcellini, P. (2009). Getting to net zero. ASHRAE Journal, 51(9), 18.

Brief Literature Review of Pain Assessment


In his literature review of pain research, Bill Noble elaborates on the methods used to test the effectiveness of analgesics between 1945 and 2000. His research outlines three different approaches: (1) psychophysics, (2) standardized words on questionnaires, and (3) verbal rating scales. Psychophysics is the oldest method of the three; it uses a stimulus to evoke pain (Hardy et al., 1940). The stimulus needed to elicit pain is quantified (the pain threshold) and subtracted from the maximum amount of stimulus the subject is able to handle (the pain tolerance); the difference is the pain interval. The second method has a more clinical use, as it is a survey format that asks patients to describe their pain using a set of standardized questions. The first of these surveys was the McGill Pain Questionnaire, which Scarry also references in her analysis of how the assessment of physical pain in a clinical setting has changed (Scarry, 1985). The third method Noble addresses is a verbal scale on which patients express their pain numerically (Figure 1). All three methods have a place in modern medicine (Noble et al., 2005).


Figure 1. Numerical scale used to quantify pain in people 3 years of age and older (Wong & Baker, 2001)
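As a minimal illustration of the psychophysical arithmetic described above, the snippet below computes a pain interval; the stimulus values are invented, in arbitrary intensity units, purely for the example.

```python
# Hypothetical illustration of the psychophysical quantities Noble describes.
# Stimulus intensities are invented, in arbitrary units.

def pain_interval(pain_threshold: float, pain_tolerance: float) -> float:
    """Pain interval = pain tolerance - pain threshold."""
    return pain_tolerance - pain_threshold

# A subject who first reports pain at intensity 2.5 and tolerates at most 7.0
# has a pain interval of 4.5 units.
print(pain_interval(pain_threshold=2.5, pain_tolerance=7.0))  # -> 4.5
```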

In their essay, Resnik and Rehm dissect why clinicians do not adequately address pain and what steps need to be taken to fix this issue. They point out, first, that clinicians are not given enough training in pain management. Second, clinicians are often hesitant to prescribe analgesics because of possible side effects, and medical regulations prohibit excess or unnecessary prescriptions of narcotics, which can lead to medication abuse. Along with improper treatment of pain by the physician, inadequate communication of pain by the patient also contributes to the problem: patients sometimes hold back from expressing pain for various reasons (insurance does not always cover pain medication, pain can mean an illness is progressing, etc.). Resnik and Rehm suggest that retraining medical professionals in pain management would help overcome this communication gap between physicians and patients. This encompasses using patients' subjective descriptions of pain in diagnosis, considering methods of treating pain outside of commercial medicine, and encouraging more conversation about pain between physician and patient to normalize the subject (Resnik & Rehm, 2001).

Evidently, research has shown that pain is difficult to express because of its subjectivity. In clinical medicine, however, standardized questionnaires and scales are used to quantify pain and diagnose patients. These diagnoses are sometimes inaccurate because of confounding factors such as legal pressure against prescribing analgesics and the communication gap between physicians and their patients.

The sources in this literature review approach pain assessment from a clinical perspective. While this is a major application of pain assessment, there is little research on the cultural differences that shape how pain is perceived and treated. Additionally, there seem to be several standard measures of pain assessment in modern medicine, and having several standards is the equivalent of having no standard; whether a single universal standard for assessing pain is possible remains largely unstudied. Further research should evaluate current models of pain assessment to establish the strengths and weaknesses of each established method used in medicine today.

Citations:

Hardy, J. D., Wolff, H. G., & Goodell, H. (1940). Studies on pain. A new method for measuring pain threshold: observations on spatial summation of pain. Journal of Clinical Investigation, 19(4), 649.

Noble, B., Clark, D., Meldrum, M., ten Have, H., Seymour, J., Winslow, M., & Paz, S. (2005). The measurement of pain, 1945–2000. Journal of Pain and Symptom Management, 29(1), 14-21.

Resnik, D. B., & Rehm, M. (2001). The undertreatment of pain: scientific, clinical, cultural, and philosophical factors. Medicine, Health Care and Philosophy, 4(3), 277-288.

Scarry, E. (1985). The body in pain: The making and unmaking of the world. Oxford University Press, USA.

Wong, D. L., & Baker, C. M. (2001). Smiling face as anchor for pain intensity scales. Pain, 89(2-3), 295-297.

Searching for Search Algorithms in Yelp


According to Alexa.com, the most popular sites on the web are search engines, with Google.com the leading provider. Practically speaking, it is easy to see why. The World Wide Web is an information system of documents that may be linked together with hyperlinks; these documents are accessed via the Internet, a system of interconnected computer networks. To reach a document, its address must either be known or "Googled." While Google didn't pioneer search engines, it is at the forefront of search engine technology, so dominant in the industry that its name has become a verb. With billions of documents on the internet, a search query would seemingly take ages, yet Google searches take less than a second on average, as the elapsed time shown with every search attests.

The whole concept of search stems from information retrieval, "a field concerned with the structure, analysis, organization, storage, searching, and retrieval of information" (Croft et al.). There are many types of searches in computer science. The simplest of all is the linear search, which traverses an array index by index, comparing each value against the target. Another interesting search is the binary search, illustrated in Figure 1. Given a sorted array, it divides and conquers by asking: is the target greater or smaller than the element currently being examined? The decision tree in Figure 1 clarifies the process (Horowitz).


Figure 1: Binary Search Decision Tree (Horowitz)
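To make the decision tree concrete, here is a minimal Python sketch of binary search; the sorted array and target are invented for illustration. Each comparison discards half of the remaining range, which is why the search finishes in logarithmic time.

```python
# A minimal iterative binary search over a sorted list, mirroring the
# decision tree in Figure 1: compare the target to the middle element,
# then discard the half that cannot contain it.
def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid          # found: return the index
        elif sorted_items[mid] < target:
            lo = mid + 1        # target can only be in the upper half
        else:
            hi = mid - 1        # target can only be in the lower half
    return -1                   # not present

print(binary_search([2, 5, 8, 12, 16, 23, 38], 16))  # -> 4
```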

Google is a crawler-based search engine whose work divides into three tasks. First, a "spider" goes through a website and all of the links in the directory of that web server. Second, these web pages are copied into an index. Finally, search engine software traverses the index to find pages that satisfy a query (Sullivan). As discussed above, we now know what linear and binary searches are. A linear search simply could not suffice for billions of pages; your daily Google search would take hours, maybe even days. Nor does a binary search fit the problem on its own, since queries are made of keywords rather than a single sorted key. Instead, entering a search query sends it through an index of keywords, each pointing to the set of documents that contain them.

People who use Google generally want only a few pages out of the hundreds of thousands of documents returned; an estimated 85% of queries request only the first page of results (Henzinger et al.). With so many results, users want only a small subset that satisfies their request (Joachims). How does this relate to Yelp? Like any search engine, Yelp utilizes an indexing system that lets it search restaurants and businesses efficiently. A linear search through the many businesses in Manhattan alone for Alice's Tea Cup would exhaust unsustainable resources for a single query in a searching service. By indexing keywords, Yelp can easily return a list of restaurants with a simple search, and its discrete tags let you filter restaurants based on their accommodations and cuisine, as the sketch below illustrates.
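The toy Python example below builds an inverted index over a handful of invented businesses and answers a query by set intersection rather than a linear scan. The names, tags, and data structure are hypothetical and only in the spirit of the crawler-index-search pipeline described above, not Yelp's actual implementation.

```python
# A toy inverted index: each keyword or tag maps to the set of business IDs
# that contain it, so a query becomes a set intersection instead of a scan.
from collections import defaultdict

businesses = {
    1: {"name": "Alice's Tea Cup", "tags": {"tea", "brunch", "manhattan"}},
    2: {"name": "Joe's Pizza",     "tags": {"pizza", "manhattan"}},
    3: {"name": "Brooklyn Brew",   "tags": {"coffee", "brooklyn"}},
}

# Build the index once, ahead of query time.
index = defaultdict(set)
for biz_id, biz in businesses.items():
    for word in biz["name"].lower().split():
        index[word].add(biz_id)
    for tag in biz["tags"]:
        index[tag].add(biz_id)

def search(*terms):
    """Return the names of businesses matching every query term."""
    matches = set.intersection(*(index[t.lower()] for t in terms))
    return [businesses[i]["name"] for i in matches]

print(search("tea", "manhattan"))  # -> ["Alice's Tea Cup"]
```

Because the index is built once, each query touches only the short posting sets for its terms, which is the whole point of indexing over scanning.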

Works Cited

Croft, W. Bruce, Donald Metzler, and Trevor Strohman. Search Engines: Information Retrieval in Practice. Pearson Education, 2010.

Henzinger, Monika R., Rajeev Motwani, and Craig Silverstein. “Challenges in web search engines.” ACM SIGIR Forum. Vol. 36. No. 2. ACM, 2002.

Horowitz, Ellis, and Sartaj Sahni. Fundamentals of computer algorithms. Computer Science Press, 1978.

Joachims, Thorsten. “Optimizing search engines using clickthrough data.” Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2002.

Sullivan, Danny. “How search engines work.” Search Engine Watch, at http://www.searchenginewatch.com/webmasters/work.html (last updated June 26, 2001) (on file with the New York University Journal of Legislation and Public Policy) (2002).

Fiducial Markers: An Important Tool for the Cyberknife’s Motion Tracking Capability


The past several assignments have explicitly focused on evaluating aspects of the Cyberknife system, such as the treatment method's precision, efficacy, and cost-effectiveness. What has yet to be explored is an important tool that essentially enables the Cyberknife to do its job and effectively treat patients with various types of cancer. This important tool is known as a fiducial marker.

One of the main aspects that distinguishes the Cyberknife system from other cancer treatment methods is that it can correct for patient motion while delivering radiation to a target cancer site. What enables the Cyberknife to do this is the use of gold fiducial markers. Past scientific literature in this field has predominantly focused on the use of fiducial markers for prostate cancer. Fiducial markers are placed into patients' prostates in either a transperineal or transrectal manner under the guidance of a transrectal ultrasound (Hellinger et al., 2015). The Cyberknife is able to offer real-time movement correction when delivering radiation by monitoring the positions of the fiducial markers with a digital x-ray (Hellinger et al., 2015).


Fig 1: Fiducial markers in the prostate. The markers are inserted into the prostate (two are visible near the center of the figure) and are then used by the x-ray portion of the Cyberknife to track and correct for position changes made by patients undergoing treatment. (Hellinger et al., 2015)

Variations on fiducial markers have been produced over the years as a means of localizing the prostate more accurately so that treatment from the Cyberknife can be delivered more precisely. A few of these variations include round markers, cylindrical markers, and elongated markers (Boer et al., 2012). Research conducted in 2012 specifically sought to determine the number of elongated fiducial markers that would localize the prostate most accurately during treatment (Boer et al., 2012). Using 1, 2, or 3 markers across 24 patients, the researchers found that placing 2 markers, one on each side of the prostate, located the prostate accurately; using 1 produced a larger position tracking error, and using 3 produced a tracking error (0.3-0.8 mm) similar to that of 2 markers (0.4-1 mm) (Boer et al., 2012). Furthermore, research conducted in 2008 suggested that fiducial markers enable the Cyberknife x-ray to keep the prostate in tracking range with approximately 40 seconds between consecutive x-rays (Xie et al., 2008).

Additional research has been conducted not only to understand the capacity of fiducial markers and their impact on prostate movement tracking, but also to improve the Cyberknife's ability to track prostate motion. A study published in 2011 specifically focused on achieving six-dimensional (6D) prostate motion tracking (Lei et al., 2011). In contrast to the research by Boer mentioned above, 4 fiducial markers were placed at least 2 cm apart in the prostate in order to obtain 6D motion tracking in Lei's patient sample (Lei et al., 2011). In 98% of the 88 patients treated with the Cyberknife system using fiducial markers, at least one marker met the criteria for 6D correction of prostate motion (Lei et al., 2011). With 6D motion tracking, a higher dose of radiation can be delivered per session, ultimately reducing the number of fractions a patient must undergo to treat his cancer.

One incredibly important factor to consider with any form of cancer treatment is the complications that may result from it. In addition to exploring 6D motion tracking, Lei's article reported that patients did not experience complications higher than grade 2 (Lei et al., 2011). Additional research reviewed 270 fiducial markers placed in 77 patients, 31 of whose implants were placed into the prostate (Kim et al., 2012). 21% of the patients experienced minor complications before or during treatment, while 1% experienced severe complications (Kim et al., 2012). 2.2% of the fiducial markers migrated from their initial implantation sites, and 96.7% of the implants were considered successful (Kim et al., 2012). In another study, Gill and colleagues noted that of the 234 patients assessed, 32% reported at least one new symptom post-treatment (Gill et al., 2012). However, most of the symptoms were grade 1 or grade 2, such as urinary frequency and mild rectal bleeding, and often lasted less than 2 weeks (Gill et al., 2012).

Overall, this paper focused on evaluating the importance of fiducial markers to the Cyberknife system for prostate cancer patients. The articles reviewed covered different types of fiducial markers, how they aid in motion tracking, ways of improving motion tracking with their use, and their overall safety and efficacy. Fiducial markers showed a high success rate for 6D correction and seldom resulted in severe complications post-treatment. Even so, it is important to note that 32% of patients undergoing radiation therapy paired with fiducial markers experienced some grade of complication as a result of the markers and treatment.

 

References

Boer, J. D., Herk, M. V., & Sonke, J. (2012). The Influence Of The Number Of Implanted Fiducial Markers On The Localization Accuracy Of The Prostate [Abstract]. IOP Science, 57(19).

Gill, S., Li, J., Thomas, J., Bressel, M., Thursky, K., Styles, C., Foroudi, F. (2012). Patient-reported complications from fiducial marker implantation for prostate image-guided radiotherapy. The British Journal of Radiology, 85(1015), 1011-1017.

Hellinger, J. C., Blacksberg, S., Haas, J., & Melnick, J. (2015, September). Interventional uroradiology in the management of prostate cancer. Applied Radiology, 40-41.

Kim, J. H., Hong, S. S., Kim, J. H., Park, H. J., Chang, Y., Chang, A. R., & Kwon, S. (2012). Safety and Efficacy of Ultrasound-Guided Fiducial Marker Implantation for CyberKnife Radiation Therapy. Korean Journal of Radiology, 13(3), 307-313.

Lei, S., Piel, N., Oermann, E. K., Chen, V., Ju, A. W., Dahal, K. N., Collins, S. P. (2011). Six-Dimensional Correction of Intra-Fractional Prostate Motion with CyberKnife Stereotactic Body Radiation Therapy. Frontiers in Oncology, 1, 1-12.

Xie, Y., Djajaputra, D., King, C. R., Hossain, S., Ma, L., & Xing, L. (2008). Intrafractional Motion of the Prostate During Hypofractionated Radiotherapy. International Journal of Radiation Oncology*Biology*Physics, 72(1), 236-246.

 

A Comparison Between Fourth Dimensional Geometry and Cubist Art Forms


The world around us exists with three different spatial dimensions. However, from as early as the 1880s, mathematicians have pondered the existence of a fourth spatial dimension, and even higher dimensions beyond that [1]. Yet, living in the third dimension, we are not able to perceive objects in hyperspace. In order to view such an object, we can use a slicing method. For example, a sphere passing through a plane would appear, to an observer in the second dimension, to start as a point, grow into a larger circle, then eventually shrink back down to a point before vanishing. Using this same method, we can view a four-dimensional object as a series of changing three-dimensional figures, as shown in Figure 1 [2].


Figure 1. (Top) A cube passing through a plane (Bottom) A hypercube passing through the third dimension [2].

This slicing method only partially describes a surface. Figure 2 shows the various ways a cube can pass through a plane, depending on its orientation. Furthermore, viewing these slices separately does not give a clear picture of what the surface actually looks like. Instead, we can use projection to see how such a surface would exist. Just as one can draw a cube on a piece of paper by distorting the lengths of edges to give the appearance of depth, we can project a hypersurface into the third dimension [1], as in the tesseract of Figure 3. Finally, perspective can also be used to render higher dimensions, just as Henri Poincaré suggested in 1902 [2].


Figure 2. Different orientations of a cube cast different images in the second dimension [1].


Figure 3. A projection of a hypercube, also known as a tesseract.
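The projection in Figure 3 can be computed directly. The short Python sketch below generates the 16 vertices of a tesseract and applies a simple perspective projection into three dimensions; the viewer distance is an arbitrary choice for illustration, and this is only one of several projection conventions.

```python
# Project the 16 vertices of a tesseract (4D hypercube) into 3D, the same
# trick as drawing a 3D cube on flat paper with foreshortening.
from itertools import product

VIEWER_DISTANCE = 3.0  # arbitrary distance of the "eye" along the w-axis

def project_4d_to_3d(vertex):
    x, y, z, w = vertex
    scale = VIEWER_DISTANCE / (VIEWER_DISTANCE - w)  # perspective divide on w
    return (x * scale, y * scale, z * scale)

# Every combination of +/-1 in four coordinates is a tesseract vertex.
tesseract = list(product((-1.0, 1.0), repeat=4))

for v in tesseract:
    print(v, "->", project_4d_to_3d(v))
# Vertices with w = -1 shrink toward the center (the inner cube of Figure 3),
# while vertices with w = +1 expand outward (the outer cube).
```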

One art form heavily influenced by perspective, or the rejection of it, is cubism. Cubist work relies on the fourth dimension and its lack of perspective; Guillaume Apollinaire called perspective “that fourth dimension in reverse” [1]. Pablo Picasso was one of the prominent artists of the cubism movement, utilizing multiple perspectives, as in one technique in which he layered two photo negatives, synthesizing four paintings in one photograph [3]. Because of this reliance on four-dimensional geometry, parallels can be seen between renderings of hypersurfaces and cubist works, in some cases by sight alone, as seen in Figure 4.


Figure 4. (Left) An example of cubist art by Juan Gris. (Right) Multiple perspectives of octahedra by E. Jouffret. Similarities can be seen between these two images.

Cubism is not the only art form that uses this fourth dimension. Mathematician and artist Tony Robbin utilizes various properties of the fourth dimension to create his artwork [4]. One example of his use of higher dimensions is in the braiding of sheets. As he explains, braiding threads starts with one-dimensional lines, which then go over and under one another, requiring the use of three dimensions. Braiding two-dimensional sheets therefore requires jumping up two dimensions as well, into the fourth. Figure 5 uses five of these sheets to create one painting. By use of the mathematical fourth dimension, artists were, and continue to be, able to create new works of art, unbound from the world we live in and focused instead on one we cannot see.


Figure 5. One of Tony Robbin’s paintings, 2006-6.

References:

[1]  Henderson, Linda Dalrymple. “The Image and Imagination of the Fourth Dimension in Twentieth-Century Art and Culture.” Configurations 17.1 (2009): 131-160.

[2]  Bodish, Elijah. “Cubism And The Fourth Dimension.” Montana Mathematics Enthusiast 6.3 (2009): 527-540. Academic Search Complete. Web. 3 Dec. 2016.

[3]  Ambrosio, Chiara. “Cubism And The Fourth Dimension.” Interdisciplinary Science Reviews 41.2/3 (2016): 202-221. Academic Search Complete. Web. 3 Dec. 2016.

[4]  Robbin, Tony. 2015. “Topology and the Visualization of Space.” Symmetry 7, no. 1: 32-39.

[5]  Henderson, Linda Dalrymple. “The Fourth Dimension and Non-Euclidean Geometry in Modern Art: Conclusion.” Leonardo, vol. 17, no. 3, 1984, pp. 205–210. www.jstor.org/stable/1575193.

Suspended Animation – Inducing Hibernation in Humans to Aid Space Travel


As previously detailed here, lengthy space expeditions have an overwhelmingly negative effect on the physiological and social well-being of a human. Conscious astronauts need entertainment, human interaction, and nourishment. If we were somehow able to slow the human metabolism and induce astronauts into some sort of hibernation, both expedition costs and psychological stressors would be dramatically reduced. There are different categorizations of hypometabolism (e.g., hibernation, torpor, and winter sleep), and since biologists regularly argue about their usage, I will consider them similar enough concepts to be synonymous (Malatesta et al. 2007).

Figure 1, presented by Ayre et al., details the main stressors of space travel and the effect that hibernation would have on each stressor.

Figure 1. “Interactions Between Hibernation and Space Environment Stressors” retrieved from Ayre et al. 2004

The only stressors made worse by suspending the astronauts could be alleviated or even solved by a system of onboard gravity, which would make the hibernation effectively a prolonged bedrest (Ayre et al. 2004). Cockett et al. also maintain that putting humans into a hypothermic state increases resistance to “shock in dysbarism, bacteremia, trauma, and excessive g-forces” (Cockett et al. 1962).

Experimental trials have been conducted on mice using H2S. Mice exposed to 80 ppm of H2S reduced their oxygen consumption by 50% and their carbon dioxide output by 60% within the first five minutes of constant exposure. Leaving them in this environment for six hours caused their metabolic rate to drop by 90%, with no permanent harmful conditions once they were removed. These results closely resemble the beginning phases of animal hibernation (Blackstone et al. 2009).

These results showed great promise but were not reproducible in larger animals; consequently, there is no long-term experimental method of inducing hibernation in humans. NASA nonetheless wants to utilize hibernation modules for a manned mission to Mars, and a 2014 presentation outlined three possible methods: lowering the body's temperature with IV fluids, gel pads, or evaporative gases; using drugs similar to the H2S that induces hibernation in mice; and reducing the number of dendrites in certain brain cells. The goal for this mission is shown below in Figure 2, as presented by Spaceworks Inc.:


Figure 2. Spaceworks Inc. hibernation goal for manned Mars mission retrieved from Bradford et al. 2014

NASA’s design for the crew’s hibernation chambers greatly reduces the amount of space needed to sustain the expedition, cutting the projected size and weight of the spacecraft by 78% and 52%, respectively. These reductions can, as outlined earlier, allow for more advanced and larger systems elsewhere on the ship (Bradford et al. 2014).

Ayre et al. further propose the future use of gene therapy and CRISPR editing to create humans with features more conducive to hibernation. Humans could be modified to maintain certain more efficient types of fat storage normally present only in babies, and with the ability to hibernate, astronauts would only need to bulk up muscle and fat mass prior to a mission. It is very possible that future astronaut candidates will be chosen before they are even born (Ayre et al. 2004).

 

References:

Blackstone E, Morrison M, Roth MB. 2009. H2S Induces a Suspended Animation-Like State in Mice 518

Bradford JE, Talk D. 2014. Torpor Inducing Transfer Habitat for Human Stasis to Mars 1:42

Ayre M, Zancanaro C, Malatesta M. 2004. Morpheus – Hypometabolic Stasis in Humans for Long Term Space Flight 1:15

Malatesta M, Miggiogera M, Zancanaro C. 2007. Hypometabolic induced state: a potential tool in biomedicine and space exploration 6:47

Cockett TK, Beehler CC. 1962. Protective Effects of Hypothermia in Exploration of Space Abstract

Physiological Threats to Mental Health in Long-Term Space Missions


The previous writing assignment covered current research on the psychological well-being of astronauts on a proposed long-term space mission. Social and behavioral factors like stress, mood states, and sleep deprivation were the main threats to astronauts' mental health. However, another side to this discussion is the physical and medical effects on mental health from ever-present physiological dangers.

On Earth, CO2 typically constitutes 0.03% of air by volume. On space habitats like the ISS, ventilation and air composition are controlled, but not always perfectly. CO2 levels on the ISS are about 0.5% (±0.2%), within NASA's Spacecraft Maximum Allowable Concentration of 0.7% CO2. However, larger variations in CO2 concentration, from poor airflow in certain regions of the ISS to unexpected increases caused by exercise or a larger congregation of astronauts in one area, can pose a real risk (Stankovic, 2016).

In one study, 22 participants were given cognitive tests at three levels of CO2 concentration: 0.06% (600 ppm), 0.1% (1,000 ppm), and 0.25% (2,500 ppm, since 1% corresponds to 10,000 ppm). Figure 1 shows the results of the study in nine graphs, each focusing on a component of the tests. In "task orientation," "initiative," "basic strategy," and "breadth of approach," there are clear deviations in the group's performance across the three CO2 levels (Satish et al., 2012).


Figure 1: Nine graphs displaying the data of the 22-participant cognitive test, with the three CO2 levels in different colors and points, participant on the x-axis, and score on the y-axis. Each of the nine graphs focuses on a component of the tests.

Though CO2 concentration poses a recognized danger at higher-than-normal levels, less research has been done on whether microgravity has a direct effect on the cognition of astronauts. One study compared the hippocampal CA1 neurons of rats before and after 14 days of simulated microgravity (Ranjan et al., 2014). The study found that the mean area, perimeter, synaptic cleft, and length of the CA1 neurons all decreased significantly after the simulated microgravity. It concluded that this deterioration could have a great effect on learning and memory, two key cognitive qualities of an efficient astronaut.


Figure 2: A diagram showing the relationship between radiation exposure during a space mission and the subsequent psychological risk. BHP stands for NASA’s Behavioral Health and Performance program.

Another cause of ill mental health in space that receives less thought, and has even less astronaut-based research behind it, is radiation. Figure 2 shows an indirect path from small exposures to radiation to nervous system damage and effects on behavioral health (Slack, 2016).

One study exposed Fischer 344 and Lewis rats to 25 and 100 cGy of proton radiation, administered rPVT tests of rodent cognition and memory, and analyzed certain brain proteins after 9 months. The study found that although the rats exposed to the two levels of proton radiation demonstrated lower accuracy and higher impulsive responding, their rPVT scores were not statistically different from the control group's. However, the irradiated rats showed significant differences in frontal cortex proteins and cytokine arrays compared with the control group, and the rats' baseline dopaminergic function was responsible for recovery from the radiation damage (Davis et al., 2015).


Works Cited

Davis, Catherine M., Kathleen L. DeCicco-Skinner, Robert D. Heinz. “Deficits in Sustained Attention and Changes in Dopaminergic Protein Levels following Exposure to Proton Radiation Are Related to Basal Dopaminergic Function.” PLOS ONE. 10, no. 12 (December, 2015) [Cited 20 November 2016].

Ranjan, Amit, Jitendra Behari, Birenda N. Mallick. “Cytomorphometric changes in hippocampal CA1 neurons exposed to simulated microgravity using rats as model.” Frontiers in Neurology. 5 (May, 2014) [Cited 20 November 2016].

Satish, Usha, Mark J. Mendell, Krishnamurthy Shekar, et al. “Is CO2 an Indoor Pollutant? Direct Effects of Low-to-Moderate CO2 Concentrations on Human Decision-Making Performance.” Environmental Health Perspectives. 120, no. 12 (December, 2012) [Cited 20 November 2016].

Slack, Kelley J., Thomas J. Williams, Jason S. Schneiderman, et al. “Evidence Report: Risk of Adverse Cognitive or Behavioral Conditions and Psychiatric Disorders.” (19) National Aeronautics and Space Administration. (April, 2016) [Cited 20 November 2016]

Stankovic, Aleksandra, David Alexander, Charles M. Oman, et al. “A Review of Cognitive and Behavioral Effects of Increased Carbon Dioxide Exposure in Humans.” (2-3) National Aeronautics and Space Administration. (August, 2016) [Cited 20 November 2016]

The Ever-Growing Popularity of eSports


As a reminder, the 2013 League of Legends Season 3 World Championship drew 32 million total viewers, while Game 7 of the 2013 NBA Finals had only 26.3 million viewers (Hollist, 2015). Surprising as it is, more people preferred watching a computer game than basketball. Several reasons can explain this.

Although eSports is the term for competitive gaming, it is still a form of video game, and video games are often played as a form of escape: escape from real-world problems and real-life responsibilities. While some games let you meet new friends and express your true self, the case for competitive games is different. “Different from collaborative virtual worlds environments, escapism in eSports is not about the social experience of slipping into avatars’ roles and becoming the virtual ‘other’ individuals would like to be; as a competitive activity, eSports… escapism is about gathering the capabilities of highly skilled avatars while immersing into the competitive virtual world in order to gain competitive advantage, which is an instrument that leads to power in the virtual. Consequently, individuals expose their true self through the way they behave in competitive virtual worlds acting as their virtual-self” (Weiss, 2013).


Figure 1: Collegiate eSports compared to collegiate basketball (Gregory, 2015)

In addition, the height and weight of eSports players have little to no effect on performance. In traditional sports, you must be physically gifted to excel, such as being 6’5’’ in basketball to slam dunk. In eSports, almost anyone can become a professional with enough practice. As shown in Figure 1, the average collegiate gamer has exactly the average height and weight of an adult male, whereas the average NCAA player stands 6’9’’ and weighs 229 lbs, a genetic gift from his parents. The phrase “practice makes perfect” applies to just about everyone in eSports.

With that said, the opportunity to become an eSports gamer is available to anyone as well. Aside from joining your collegiate team, there are several eSports sites that host their own sponsored online tournaments. “Competitive gaming is becoming easier to try for yourself. In June, Gfinity launched a new website, Gfinity.net, for people to do just that. It functions like a social network for gamers, staging online competitions daily and awarding £30,000 in prize money each month. ‘We provide a route into professional gaming if that’s what you want,’ says Wyatt. The company also aims to run large-scale gaming tournaments in sporting arenas every couple of months. ‘If it continues growing at the same rate, events like G3 will be the norm,’ says Wyatt” (Heaven, 2014). Anyone with the motivation can join these tournaments, which are great opportunities to gain exposure if you perform well.

Remember that the eSports industry is huge. “According to the 2008 Entertainment Software Association (ESA) report, nearly 270 million computers and video game consoles were sold within the US, generating close to $10 billion in 2007, and it is estimated that video games are a $20 billion industry in the US alone. The eSports industry is also booming in other countries like South Korea in that professional gaming teams have corporate sponsors (e.g., Samsung) and tens of thousands of spectators gather and cheer for their favorite teams to win… Although these numbers do not provide precise information in terms of how much of the entire game industry is specifically about eSports, it is clear that this emerging market segment produces billions of dollars and contributes economically to the growth of the sport industry as a whole” (Lee, 2011).

References

Gregory, S. (2015, April 6). Virtual World, Varsity Sport. Time, 185(12), 44-47.

Heaven, D. (2014, August 16). Rise and rise of esports. New Scientist, 223(2982), 17.

Hollist, K. E. (2015). TIME TO BE GROWN-UPS ABOUT VIDEO GAMING: THE RISING ESPORTS INDUSTRY AND THE NEED FOR REGULATION. ARIZONA LAW REVIEW, 57(3), 823-847.

Lee, D., & Schoenstedt, L. J. (2011, Fall). Comparison of eSports and Traditional Sports Consumption Motives [Abstract]. The ICHPER-SD Journal of Research in Health, Physical Education, Recreation, Sport & Dance, 6(2), 39-44.

Weiss, T., & Schiele, S. (2013, April 20). Virtual worlds in competitive contexts: Analyzing eSports consumer needs. Electron Markets, 23, 307-316.

Gap Junctions: The Complex Bridge of the Body


Gap junctions are a crucial component of cell-to-cell communication: they connect cells and can transport a multitude of products. They are found in many different types of body cells and assist with a variety of functions.

Gap junctions are present in a variety of bone cells (Doty, 1981). As intercellular bridges, they connect adjacent bone cells, which suggests they help control and coordinate bone cell activity. In most cells, gap junctions connect the cytoplasm of two cells and can transfer hydrophilic molecules. However, when an immune response surfaces, rat cells can shut off these gap junctions in order to minimize the spread of a foreign substance (Fraser, 1987).

Gap junctions are even used in cells related to hormone distribution. Thyroid cells were found to rebuild gap junctions in response to the hormone TSH; however, when protein kinase C was activated, the functional activity of the gap junctions was negatively affected (Munari-Silem, 1991). This sensitivity to hormones shows how different aspects of cell communication can intertwine. In fact, gap junctions can differ in selectivity because the connexin subunits that form them can be mixed and matched, leading to a whole realm of complexity in what passes through the junctions (Kumar, 1996).


Figure 1: Model of a pore in a gap junction. The pore is constructed from the different connexin subunits (Kumar, 1996).

Sources

Doty, Stephen B. “Morphological Evidence of Gap Junctions between Bone Cells.” Calcified Tissue International 33.1 (1981): 509-12.

Fraser, S., C. Green, H. Bode, and N. Gilula. “Selective Disruption of Gap Junctional Communication Interferes with a Patterning Process in Hydra.” Science 237.4810 (1987): 49-55.

Munari-Silem, Yvonne, Christine Audebet, and Bernard Rousset. “Hormonal Control of Cell to Cell Communication: Regulation by Thyrotropin of the Gap Junction-Mediated Dye Transfer between Thyroid Cells.” Endocrinology 128.6 (1991): 3299-309.

Kumar, Nalin M., and Norton B. Gilula. “The Gap Junction Communication Channel.” Cell 84.3 (1996): 381-88.

Different Theories That Attempt To Describe and Explain The Universe


Many scientists have attempted to explain the universe we reside in through many different theories. None is absolute, of course, but some are more believable than others. While no definite evidence fully supports any single theory, we can only judge a theory by how well its intricacies match the mathematical equations that describe the governing laws of physics. Many theories and models can be explored, the prevalent ones being the multiverse theory, quantum field theory, the anisotropic model, and the currently accepted Big Bang Theory.

One of these theories consists of an anisotropic model of the universe. This model explores a role “in the study of cosmic highly excited strings in the early universe” (Sepehri et al., 2015). These strings became an important part of the theory because they were supposedly created during the phase transition after the Big Bang, as the temperature lowered, and “then decay[ed] to standard model particles at the Hagedorn temperature” (Sepehri et al., 2015). Essentially, the theory holds that vector string tachyons control the expansion of the anisotropic universe as it shifts from the non-phantom phase to the phantom phase, with the phantom-dominated era accelerating and ending in a big rip singularity (Sepehri et al., 2015).

Another theory, quantum field theory, asserts that quantum fields propagate on a classical background, defining quantum phenomena “in a regime where the quantum effects of gravity do not play a dominant role, but the effects of curved spacetime may be significant” (Tavakoli and Fabris, 2015). This theory of quantum fields on classical curved spacetime becomes invalid in regimes arbitrarily close to the classical singularities, where spacetime curvature approaches Planckian scales and the quantum effects of gravity are no longer negligible (Tavakoli and Fabris, 2015). The theory is expressed through many complex mathematical equations and finds some foothold as a plausible explanation of the creation of particles in a cyclic universe.

Another article uses a microgravity environment for the central nervous system to explore the beginnings of mankind in a purely theoretical sense. While it does not primarily discuss the origins of the universe, it relates the beginnings of mankind to the universe. The connection is made through the limbic system, linking brainwaves, oscillations, and the soul, with the soul as our origin and the greater limbic system as its seat. The article asserts that everything moves in a wave-like pattern, where everything is oscillating, and relates this idea to parts of the human body that create wave-like oscillations, such as “brain waves, heart rate, blood pulsation, and pressure, respiration, peristalsis for most living creatures and oscillations or waves for the whole of the universes contents” (Idris, 2014). These relations form the basis of this theory, which rests more on analogy than on mathematical logic and proof.

One of the better-known proposals is the theory of multiple universes, in which an infinite number of universes exist that accommodate all possible scenarios of events: the multiverse theory. The theory contrasts a “many-worlds view, in which all possible outcomes of a quantum measurement are always actualized, in the different parallel worlds, and a one-world view, in which a quantum measurement can only give rise to a single outcome” (Aerts & Bianchi, 2015). This is made possible by many quantum measurements happening frequently, thus allowing for multiple pictures. The theory draws some basis from the equations of quantum theory that describe waves; on the many-worlds reading, the appearance of just one image in the results of quantum measurement is an illusion (Vaidman, 2015).

Currently, many theories and attempts are being made to describe the universe, but they are immensely difficult to explain and involve many intricacies. Even with all their specificity, most fall short in some aspect, and given our incomplete knowledge we cannot fully accept any one theory. The Big Bang Theory explains many of the phenomena we have come to know and understand, and explains them well according to our knowledge thus far, so for the time being it remains the accepted theory.

 


Figure 1. Numerical solution for the scale factor of the universe represented as a graph. Oscillatory behavior is shown for the scale factor in the whole evolution of the universe (Tavakoli & Fabris, 2015).

Works Cited

Aerts, Diederik, and Massimiliano Sassoli de Bianchi. “Many-Measurements Or Many-Worlds? A Dialogue.” Foundations Of Science 20.4 (2015): 399-427.

Idris, Zamzuri. “Searching For The Origin Through Central Nervous System: A Review And Thought Which Related To Microgravity, Evolution, Big Bang Theory And Universes, Soul And Brainwaves, Greater Limbic System And Seat Of The Soul.” Malaysian Journal Of Medical Sciences 21.4 (2014): 4-11.

Sepehri, Alireza, Anirudh Pradhan, and Hassan Amirhashchi. “Removing The Big Rip Singularity From Anisotropic Universe In Super String Theory.” Canadian Journal Of Physics 93.11 (2015): 1324-1329.

Tavakoli, Yaser, and Júlio C. Fabris. “Creation Of Particles In A Cyclic Universe Driven By Loop Quantum Cosmology.” International Journal Of Modern Physics D: Gravitation, Astrophysics & Cosmology 24.8 (2015): -1.

Vaidman, Lev. “The Emergent Multiverse: Quantum Theory According To The Everett Interpretation.” British Journal For The Philosophy Of Science 66.2 (2015): 465-468.