Net Zero Energy Building: An Overview of Its Progress

Posted on Dec 3, 2016 in Writing Assignment 7

One of the major aspects of green building is creating a structure that is energy efficient. Attaining a net-zero energy building, defined as a building that produces as much energy on site as it consumes, is a highly sought-after goal today. Figure 1 illustrates the features of a typical net-zero commercial building. With energy use rising as the building sector grows, energy saving has become ever more relevant.


Figure 1: Features of a Net-Zero Commercial Building, Source: Efficiency Vermont

Despite attempts to reduce energy consumption in the building sector, a large number of buildings have failed to achieve the desired energy savings. In a study evaluating the performance of constructed sustainable buildings, only 1% of the 490 buildings studied successfully produced 20% of their energy from renewable sources. On average, these buildings received 38% of the available points in the LEED Energy and Atmosphere category (Berardi, 2012). This is largely because net-zero energy is a new concept in the building world. In response to these unsatisfying results, green building supporters are working to improve the methods used to implement the idea. When designing a net-zero energy building, it is important to ensure that the amount of energy used to produce renewable energy is insignificant; in some situations, this overhead defeats the purpose of saving energy in the first place (Hudson, 2014). Another way to improve the net-zero concept is to analyze buildings’ utility bills to identify where energy problems exist (Pless & Torcellini, 2008). Once the points of failure are identified, the building designs can be modified and improved upon.
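
Pless and Torcellini’s suggestion amounts to a simple energy-balance audit. As a minimal sketch of the idea, the Python snippet below checks annual net energy and ranks the months that pull a building away from net zero; the monthly figures are hypothetical, not from any cited study.

```python
# Hypothetical monthly utility data (kWh); illustration only.
consumed = {"Jan": 12000, "Feb": 11000, "Mar": 9500, "Apr": 8000,
            "May": 7000, "Jun": 9000, "Jul": 10500, "Aug": 10000,
            "Sep": 8000, "Oct": 8500, "Nov": 10000, "Dec": 11500}
produced = {"Jan": 4000, "Feb": 5000, "Mar": 7500, "Apr": 9000,
            "May": 10500, "Jun": 11000, "Jul": 11500, "Aug": 10500,
            "Sep": 9000, "Oct": 7000, "Nov": 4500, "Dec": 3500}

# Net zero means on-site production covers consumption over the year.
annual_net = sum(produced.values()) - sum(consumed.values())
print(f"Annual net energy: {annual_net} kWh")

# Rank months by deficit to see where design changes would pay off most.
deficits = {m: consumed[m] - produced[m] for m in consumed}
for month, gap in sorted(deficits.items(), key=lambda kv: -kv[1]):
    if gap > 0:
        print(f"{month}: {gap} kWh short of on-site production")
```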

To encourage the reduction of energy use, the government has set green building requirements. In 2007, President Bush signed a bill aiming to reduce the energy consumption of federal buildings. The bill requires that all new and renovated federal buildings become fossil fuel free and that a roadmap for the construction of net-zero energy buildings be established by 2030 (Colker, 2008). The Department of Energy hopes to develop a method for creating cost-effective net-zero energy buildings by 2025 (Pless & Torcellini, 2008). In addition to enforced policies, it is essential to educate people on zero-energy and sustainable building. The American Society of Heating, Refrigerating and Air-Conditioning Engineers has created a three-part Energy Efficiency Guide targeted at owners of existing commercial buildings. These books explain to owners why they should cut their energy use, how they should do it, and how to keep their buildings running efficiently after incorporating green technologies (Holness, 2011). One reason green buildings do not perform as expected is that their occupants do not know how to make effective use of green technologies. Through policies and incentives, the development of better net-zero energy buildings can be stimulated.

Based on green building performance evaluations, net-zero energy has still not been successfully achieved. Currently, green building advocates are working on developing better methods and designs to reach this goal. Considering its increasing relevance, we can expect to see improved designs of net-zero energy buildings within the next few years.

 

References

Berardi, U. (2012). Sustainability assessment in the construction sector: rating systems and rated buildings. Sustainable Development, 20(6), 411-424.

Colker, R. M. (2008). Federal roadmap for net-zero. ASHRAE Journal, 50(2), 53. Retrieved from http://go.galegroup.com/

Holness, G. V. R. (2011). On the path to net zero: how do we get there from here? ASHRAE Journal, 53(6), 50+. Retrieved from http://go.galegroup.com/

Hudson, S. (2014, August). Zero-net energy buildings are game changers in green engineering. Design News, 69(8), 14. Retrieved from http://go.galegroup.com/

Pless, S., & Torcellini, P. (2009). Getting to net zero. ASHRAE Journal, 51(9), 18.

Using Narrative Medicine to Explore the Social Roots of Illness

Posted on Dec 3, 2016 in Writing Assignment 7

Andrew Herxheimer and Ann McPherson, a clinical pharmacologist and a general physician, respectively, initiated the DIPEx Project, a database of individual experiences from patients who received hospital treatment. The database is divided into modules, each containing accounts particular to its topic; examples include chronic pain, breast screening, ovarian cancer, and the fears of people with dementia. The experiences shared on DIPEx originated from interviews conducted orally, as is customary in medical sociology (Lawton, 2003), in which the participant was encouraged to share their story without being interrupted (Herxheimer & Ziebland, 2004).

This use of narrative to explore illness ties into Harvard psychiatrist Arthur Kleinman’s argument that differentiating between “illness experience” and “disease experience” allows us to separate experiences of symptoms and suffering from physiological changes. While both aspects are crucial to understanding the full extent of an ailment, disease experience alone does not allow us to diagnose ailments efficiently (Kleinman, 1988). Paul Farmer, in Infections and Inequalities, explores the social roots of illness. He claims that the largest epidemic we face is poverty. Poverty forces people to live in small, unsanitary, poorly ventilated spaces. Poverty drives up crime rates because people steal resources that are unaffordable yet necessary for survival, and increased crime makes living spaces unsafe. Poverty dictates our food choices, which in turn affect overall health.

Figure 1. Kleinman’s book, The Illness Narratives

Kleinman takes this further, arguing that the causes of pain and suffering are the same institutions that distort the accounts of pain that are expressed. Pain is a manifestation of a much larger idea: we experience pain through our surroundings, and the influences of day-to-day life greatly impact our well-being. There is violence in all aspects of our lives, and this constant violence influences how well our bodies function. Violence can come from images, daily stress, and fear or hatred, causing pain and suffering in our bodies and leaving us ill and weak (Kleinman, 2000).

Citations:

Farmer, P. (2001). Infections and inequalities: The modern plagues. Univ of California Press.

Herxheimer, A., & Ziebland, S. (2004). The DIPEx project: collecting personal experiences of illness and health care. Narrative research in health and illness, 115-131.

Kleinman, A. (1988). The illness narratives: Suffering, healing, and the human condition. Basic Books.

Kleinman, A. (2000). The violences of everyday life. Violence and subjectivity, 226-241.

Lawton, J. (2003). Lay experiences of health and illness: past research and future agendas. Sociology of Health & Illness, 25(3), 23-40.

Gamification of Yelp

Posted on Dec 3, 2016 in Writing Assignment 7

Yelp has built an entire database by crowdsourcing information from users. Without the people, Yelp would never have been as successful as it is now. While it may seem too good to be true that users do the job for Yelp for free, this is the reality of Yelp’s success. Strictly speaking, Yelpers don’t do it for free: they are paid not with money but with satisfaction. A user has an incentive to build a reputation in the Yelp community because it almost feels like a game. In a massively multiplayer online role-playing game, millions of players spend large parts of their daily lives bettering their characters by any means the game allows. Be it a weapon that deals more attack damage or boots that increase your intelligence, players constantly farm gold and experience points to get stronger. While it may seem like this behavior would be limited to games, which feel rewarding simply because they are fun, businesses have found a way to incentivize users with a “game” of their own.

About 70% of users on the Internet are a part of some social network (Kachniewska). The outreach and influence of social media websites is immense, and businesses can use this to their advantage, getting a better grasp of their users by captivating them with rewards. Just like increasing the stats on your armor in World of Warcraft, increasing the number of followers you have on Yelp feels just as great. Games have been around in almost every culture since ancient times. As strange as it may seem, researchers have started to find that we are “hardwired” to play (Zichermann and Cunningham).

There are many elements that can make up a game that can also be part of gamification: “Self-representation with avatars; three-dimensional environments; narrative context; feedback; reputations, ranks, and levels; marketplaces and economies; competition under rules that are explicit and enforced; teams; parallel communication systems that can be easily configured; time pressure” (Deterding, et al.). In Figure 1, we can see that in order to implement different aspects of a game in a product, the product needs to uphold the principles of each level.

Figure 1: Levels of Gamification (Deterding, et al.)

Yelp incorporates several aspects of gamification. Reviews are publicly displayed to those looking at the business. The number of friends is listed, as well as the number of reviews each reviewer has written. Furthermore, reviews can be rated by other Yelpers who found the information helpful. Lastly, users can receive compliments for the quality of their reviews (Pellikka). It has been shown that video games can change how someone behaves in whatever way the game intends (Deterding, et al.).
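
These mechanics can be pictured as a simple points system. The sketch below is a hypothetical model of a Yelp-like reputation score, not Yelp’s actual algorithm: it rewards the same signals described above, with made-up weights that favor peer feedback on review quality over raw social-proof counts.

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    """Hypothetical Yelp-like user; not Yelp's real data model."""
    name: str
    friends: int = 0
    reviews: int = 0
    helpful_votes: int = 0  # ratings from Yelpers who found a review helpful
    compliments: int = 0

    def reputation(self) -> float:
        # Made-up weights: friend count matters least, compliments most.
        return (0.5 * self.friends + 1.0 * self.reviews
                + 2.0 * self.helpful_votes + 3.0 * self.compliments)

alice = Reviewer("alice", friends=40, reviews=120, helpful_votes=85, compliments=12)
print(f"{alice.name}: {alice.reputation():.0f} points")  # alice: 346 points
```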

Works Cited

Deterding, Sebastian, et al. “Gamification: Using game-design elements in non-gaming contexts.” CHI’11 Extended Abstracts on Human Factors in Computing Systems. ACM, 2011.

Deterding, Sebastian, et al. “From game design elements to gamefulness: defining gamification.” Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments. ACM, 2011.

Kachniewska, Magdalena. “Gamification and Social Media as Tools for Tourism Promotion.” Handbook of Research on Effective Advertising Strategies in the Social Media Age (2015): 17.

Pellikka, Harri. “Gamification in Social Media.” (2014).

Zichermann, Gabe, and Christopher Cunningham. Gamification by Design: Implementing Game Mechanics in Web and Mobile Apps. O’Reilly Media, Inc., 2011.

Terraforming Celestial Bodies For Human Life: Mars Case Study

Posted on Dec 3, 2016 in Writing Assignment 7

Manned interplanetary space travel, coupled with a population likely too large for the Earth, will inevitably lead to human colonization of other planets and moons. The construction of safe indoor and outdoor living spaces on another planet requires changing its ecology and atmosphere to be friendly toward the species of Earth. It takes a monumental engineering effort for a planet to become habitable enough for an astronaut to land and safely take off their spacesuit. Since the most viable planet closest to us is Mars, most ecopoiesis/terraforming research efforts are directed there.

The first phase in making a planet hospitable to the human race is ecopoiesis, in which an ecosystem is artificially constructed to begin altering the planet, i.e., its atmosphere. The oxygen available on Mars is nearly nonexistent, so photosynthesis is needed to transform the planet’s carbon dioxide (Thomas 1995). Earth plant life also has a minimum requirement of about 10 mbar of nitrogen, however, which would first need to be produced by Earth bacteria through denitrification (McKay et al. 1991, Friedmann et al. 1995). Because of this, the first step in transforming the biosphere of Mars is to introduce large amounts of nitrogen-releasing bacteria to martian soil (Friedmann et al. 1995).

Once the atmosphere has enough nitrogen in it, plants engineered to survive on as little as 1 mbar of oxygen can get to work converting the massive amounts of carbon dioxide in the atmosphere; there is no observed upper limit to the concentration of carbon dioxide conducive to Earth plant life. Sending species heavily resistant to UV damage would yield the best results, as Mars has no ozone layer (Thomas 1995).

The resulting engineered atmosphere would still be too thin for human life: the average pressure of Mars’s atmosphere is 6-10 mbar, while one Earth atmosphere is about 1 bar. CFCs and other greenhouse gases are being considered as candidates for thickening the martian atmosphere to densities friendly to humans. The increase in atmospheric density would also raise global temperatures; the average martian surface temperature is -60 °C and would need to rise at least 60 degrees to allow for liquid water (Budzik 2000). The warming caused by the release of greenhouse gases would melt the polar caps, releasing more gas and heating the atmosphere even further in a positive feedback loop (McKay et al. 1991). Budzik claims that an initial increase of 4 degrees Celsius would result in a total increase of 55 degrees (Budzik 2000).
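
Budzik’s 4-to-55-degree figure is the signature of a converging positive feedback loop. The toy model below is my own illustration, not Budzik’s actual calculation: each degree of warming releases enough greenhouse gas to cause a fixed fraction f of a degree of further warming, so the total warming is the geometric series dT0 / (1 - f).

```python
# Toy feedback model (illustration only, not Budzik's method): an initial
# warming dT0 releases CO2 that warms the planet by f degrees per degree.
dT0 = 4.0           # initial engineered warming, deg C (Budzik 2000)
f = 1 - dT0 / 55.0  # gain needed for 4 C to amplify into 55 C total

total, step = 0.0, dT0
while step > 1e-6:  # sum the geometric series term by term
    total += step
    step *= f
print(f"feedback gain f = {f:.3f}, total warming = {total:.1f} C")
# -> feedback gain f = 0.927, total warming = 55.0 C
```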

A very careful balance must be struck between the concentrations of the various gases in the newly engineered atmosphere to safely recreate the Earth’s atmosphere. A detailed breakdown of concentrations is provided by McKay et al. in Figure 1:

Figure 1. Atmospheric gas limits for sustainable human life retrieved from McKay et al. 1991

This process of transforming gas ratios is lengthy and convoluted. Birch proposed an alternative method of evaporating the polar caps by using a giant (200,000 ton) mirror to redirect sunlight toward them (Budzik 2000, Birch 1992). Other, more immediate methods include dropping bombs or redirecting asteroids to kick up large amounts of dust and thicken the atmosphere (Budzik 2000).

The methods presented here can be adapted for barren dusty planets with polar caps. As we have not yet tried expanding to planets hotter than our own (e.g. Venus), we have not yet developed methods for terraforming these types of bodies. Although some planets may be completely uninhabitable, it is likely that the astronauts of the far distant future will be able to transform the vast majority of celestial bodies.

 

References:

Budzik JM. 2000. How to Terraform Mars: An Analysis of Ecopoiesis and Terraforming Research. 1-17.

Thomas DJ. 1995. Biological Aspects of the Ecopoiesis and Terraformation of Mars: Current Perspectives and Research. 415-418.

McKay CP, Toon OB, Kasting JF. 1991. Making Mars habitable. Nature 352:489-496.

Friedmann EI, Ocampo-Friedmann R. 1995. Advances in Space Research. 243-246.

Birch P. 1992. Terraforming Mars quickly [Abstract].

Detecting The Event Horizon of A Black Hole Using Radio Technology

Posted on Dec 3, 2016 in Writing Assignment 7

Discovering and viewing the event horizon of a black hole has been a goal of scientists ever since black holes were first hypothesized. They have intrigued many minds, as no one knows exactly what happens at the event horizon, or even whether black holes truly exist, since candidate objects could be mistaken for something else. Many attempts have been made in the past to discover and view the event horizons of nearby black holes, and technology is still being applied today in newer ways to extend this search and possibly reach the goal of finally finding and viewing event horizons. Newer methods have been proposed, but have yet to be tested.

Firstly, the black hole itself is difficult to view, as it is “not visible to the outside world” and “no signal can reach the region of space-time outside” a certain “Schwarzschild radius”; the boundary at this radius is known as the event horizon (Dolan, 2001). In a sense, detecting the event horizon and discovering the black hole become synonymous: finding the horizon proves the existence of the black hole itself. In practice, X-ray and UV observations are generally used to detect black holes (Dolan, 2001). Another article discusses how data collected by the Hubble Space Telescope a decade earlier is being synthesized to “observe what seems to be the last gasp emitted by gaseous material spiraling Cygnus X-1, a suspected black hole 6,000 light-years from Earth” (Cowen, 2001). Blobs of hot gas spiraling around and orbiting the black hole radiate pulses of ultraviolet light that rapidly grow fainter and then simply disappear, leading to the expectation that the gas is about to enter the event horizon. Furthermore, the light emitted from these gases “grows dimmer because the black hole’s gravity shifts the light to longer and longer wavelengths,” and radiation stops entirely by the time the gas enters the black hole (Cowen, 2001).
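
The Schwarzschild radius itself comes from a simple formula, r_s = 2GM/c^2. The sketch below is an illustration using standard constants; the 15-solar-mass figure for Cygnus X-1 is a commonly quoted estimate, not a value taken from the cited articles.

```python
# Schwarzschild radius r_s = 2GM/c^2: no signal inside it can escape.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    return 2 * G * mass_kg / c**2

print(f"Sun:        {schwarzschild_radius(M_SUN) / 1e3:.1f} km")       # ~3.0 km
print(f"Cygnus X-1: {schwarzschild_radius(15 * M_SUN) / 1e3:.1f} km")  # ~44 km
```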

Before technology could be used to find event horizons, mathematical algorithms were created to locate them where they would make sense in theory. These algorithms are used in conjunction with radio technology to detect and locate a black hole and its respective event horizon. Specifically, Jonathan Thornburg uses the “3 + 1 ADM formalism” to conduct calculations that help find “apparent horizons in numerically-computed spacetimes” (Thornburg, 2007).

A newer method spaces telescopes a great distance apart and combines the signals collected by each telescope, allowing scientists to attain a resolution much greater than that of the Hubble Space Telescope. The technique is called Very Long Baseline Interferometry (VLBI), and it exploits the phenomenon of interference, in which multiple light waves are superimposed to amplify a signal (Wanjek, 2008). VLBI has been used to detect a massive radio source at the center of the Milky Way Galaxy, Sagittarius A*, and allowed an array of three telescopes to achieve the first measurement of its size (Schwarzschild, 2008).
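
The resolving power of such an array follows the diffraction relation theta ≈ lambda / B, where lambda is the observing wavelength and B is the baseline between dishes. The numbers below are chosen to be typical of millimeter-wavelength VLBI, not the exact parameters of the Sagittarius A* measurement.

```python
import math

# Interferometer angular resolution: theta ~ wavelength / baseline (radians).
wavelength = 1.3e-3  # observing wavelength, m (1.3 mm)
baseline = 4.5e6     # dish separation, m (~4500 km, roughly continental)

theta_rad = wavelength / baseline
theta_uas = math.degrees(theta_rad) * 3600 * 1e6  # to microarcseconds
print(f"resolution ~ {theta_uas:.0f} microarcseconds")  # ~60 microarcseconds
```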

The James Clerk Maxwell Telescope served as one of the three radio telescopes that formed the Very Long Baseline Interferometry array used to achieve the first size measurement of Sagittarius A*, the black hole at the Milky Way’s center (Schwarzschild, 2008).

Works Cited

Cowen, R. “Peering At Black Holes: An Eventful Look.” Science News 159.3 (2001): 38.

Dolan, Joseph F. “How To Find A Stellar Black Hole.” Science 292.5519 (2001): 1079-1080.

Schwarzschild, Bertram. “Radio Interferometry Measures The Black Hole At The Milky Way’s Center.” Physics Today 61.11 (2008): 14-18.

Thornburg, Jonathan. “Event And Apparent Horizon Finders For 3+1 Numerical Relativity.” Living Reviews in Relativity 10.4 (2007): 1-68.

Wanjek, Christopher. “Radio Dishes Tune In To Event Horizon.” Mercury 37.4 (2008): 9.

Music Through the Lens of Neuroscience and Psychology

Posted on Dec 3, 2016 in Writing Assignment 7

I have chosen a focus for this writing assignment apart from my video project about the Mars mission. I play many instruments, in many different genres, and my love of listening to music is even more diverse and extensive. I use music as therapy, a creative outlet, a stimulant, a pastime, and an aid in any social situation. I have always been fascinated by the science behind humanity’s affection for this art of sound, and by what neurological and psychological research has uncovered about one of the oldest and most prolific forms of culture.

In one study, 16 participants listened to 5 characteristically happy songs, 5 sad songs, and 10 neutral songs, all classical music, while inside an fMRI machine. Blood oxygenation level dependent (BOLD) signal contrasts were used to indicate which parts of the brain were most active under the influence of the music. With happy music, BOLD signal contrasts appeared in the ventral and dorsal striatum, anterior cingulate, parahippocampal gyrus, and auditory association areas, regions commonly associated with reward, movement, and emotional processing. With sad music, a BOLD signal contrast appeared in the hippocampus/amygdala region, commonly associated with memory and emotions. The study showed the neurological response of different areas of the human brain to music of different moods (Mitterschiffthaler et al., 2007).

Another study used an fMRI machine to compare the neurological differences between language and music. A 29-year-old bilingual pianist, literate in both Japanese and English, underwent a series of fMRI tests while reading Japanese, English, and a musical score. Figure 1 shows images of the fMRI results for the three categories, showing the similarities between processing written music and written language. However, there are differences, such as in the right transverse occipital sulcus, which can be accounted for by written music’s representation of rhythm and pitch; these aspects require more cortical processing (Nakada et al., 1998).

Figure 1: fMRI images of the regions of the participant’s brain (shown in red and yellow) that responded most to reading the three categories of language: Japanese, English, and music.

Aside from these fascinating insights into music’s effects on the physical brain, psychological research has also demonstrated music’s curious influence. A study of 60 college students, 30 biology majors and 30 music majors, sought to determine whether musical education influenced biological responses to two types of music. Plasma levels of cortisol and norepinephrine were measured before and after listening to both musical pieces, and galvanic skin responses were measured as well. The study concluded that cortisol levels and galvanic skin responses rose significantly more after listening for music majors than for biology majors, and that a cortisol increase is a good indicator of analytical and critical listening to music (VanderArk & Ely, 1993).

A case study of two patients suffering from frontotemporal dementia (FTD) investigated a sudden development of a taste for pop music. A 68-year-old lawyer, whose musical preference had always been classical and who regarded pop music as “mere noise,” began listening to Italian pop music two years after being diagnosed with FTD. After three years, he was listening for hours every day, and continued until his death four years after the diagnosis. A 73-year-old housewife originally could not tolerate listening to music, and did so only for occasional dance songs, until she developed a great affinity for Italian pop music one year after her diagnosis of FTD. This study builds on past research on dementia patients developing greater preferences and abilities in art and music (Geroldi et al., 2000).

In another study, 79 participants with an ICD-10 diagnosis of depression were randomized to receive individual therapy, either standard care with music therapy or standard care alone, for 20 weeks. Figure 2 displays the results, showing that participants who received standard care with music therapy had far better improvements in psychiatric scores than those who received standard care alone (Erkkila, 2011).

Figure 2: The results of five psychiatric tests, with change in scores on the y-axis and time in months on the x-axis. (a) Montgomery–Åsberg Depression Rating Scale; (b) Hospital Anxiety and Depression Scale – Anxiety; (c) Global Assessment of Functioning; (d) Toronto Alexithymia Scale – 20; (e) Health-related quality of life scale RAND–36.


Works Cited

Erkkila, Jaakko, Marko Punkanen, Jorg Fachner, et al. “Individual music therapy for depression: randomised controlled trial.” The British Journal of Psychiatry 199, no. 2 (July 2011) [Cited 20 November 2016].

Geroldi, Cristina, Tiziana Metitieri, Giuliano Binetti, et al. “Pop Music and Frontotemporal Dementia.” Neurology 55, no. 12 (December 2000) [Cited 20 November 2016].

Mitterschiffthaler, Martina T., Cynthia H.Y. Fu, Jeffrey A. Dalton, et al. “A functional MRI study of happy and sad affective states induced by classical music.” Human Brain Mapping 28, no. 11 (November 2007) [Cited 20 November 2016].

Nakada, Tsutomu, Yukihiko Fujii, Kiyotaka Suzuki, et al. “‘Musical brain’ revealed by high-field (3 Tesla) functional MRI.” NeuroReport 9, no. 17 (December 1998) [Cited 20 November 2016].

VanderArk, Sherman D., and Daniel Ely. “Cortisol, biochemical, and galvanic skin responses to music stimuli of different preference values by college students in biology and music.” Perceptual and Motor Skills 77, no. 1 (August 1993) [Cited 20 November 2016].

Cyberknife Linked to Maintaining Cancer Patient Quality of Life Scores Before and After Treatment

Posted on Dec 3, 2016 in Writing Assignment 7

As previously mentioned, the Cyberknife is a radiosurgery technique used to target and treat cancerous masses in the body. As this treatment method has grown in popularity, various research studies have been published on the Cyberknife. For example, past literature has evaluated the precision of the Cyberknife, how it is able to track and correct for patient motion, its efficacy, and the overall cost-effectiveness of the system. In addition to exploring all of these factors, it is incredibly important for researchers to evaluate the impact of the Cyberknife system on quality of life for cancer patients.

Before delving into literature on this topic, it is useful to define the phrase “quality of life,” or QOL. One hypothesized meaning of the term is the measurement of the difference between a person’s expectations for their life and their present-day experiences (Calman, 1984). Past research has been published regarding the impact of the Cyberknife system on quality of life for cancer patients. For instance, a 2013 study evaluated urinary, bowel, and sexual QOL scores for prostate cancer patients at baseline and numerous times post-treatment (Katz et al., 2013). Among the patients assessed, mean urinary and bowel QOL declined 1 month post-treatment and returned to baseline after two years; at 6-12 months, sexual QOL declined by a mean of 23% and remained stable afterward (Katz et al., 2013). Similar findings were reported in a follow-up QOL assessment conducted by Katz (2014). Thus, this research on prostate cancer indicates that the Cyberknife does not have a particularly positive impact on patient quality of life post-treatment.


Figure 1: Quality of life scores for prostate cancer patients post-Cyberknife treatment. The graph shows the mean and standard deviation of QOL scores for patients at baseline and up to three years post-treatment. Bowel, urinary, and sexual QOL were assessed for this patient sample. The graph indicates that bowel and urinary QOL dipped post-treatment but eventually returned to baseline, whereas sexual QOL slightly decreased over time without returning to baseline. (Katz et al., 2013)

In addition to prostate cancer, research on this topic has been conducted on patients treated for spinal tumors. After undergoing Cyberknife treatment, patients with spinal lesions completed quality of life surveys (Gagnon et al., 2009). Patients exhibited no significant changes in physical quality of life post-treatment, but they did exhibit significantly higher mental quality of life scores (Gagnon et al., 2009). Research published by Degen et al. on spinal tumor treatment suggested that mental and physical well-being QOL remained relatively constant from before Cyberknife treatment to two years post-treatment (2005). Both of these studies on spinal tumor treatment indicated that the Cyberknife system significantly reduced patient pain post-treatment (Gagnon et al., 2009; Degen et al., 2005).

Overall, the literature regarding the impact of the Cyberknife system on QOL for cancer patients depicts quite similar findings. This treatment method does not have a significant impact on improving or reducing QOL post-treatment, but appears to stabilize QOL over time. Although the finding was not considered significant, QOL for sexual functioning of prostate cancer patients decreased and remained at that level for numerous years post-treatment (Katz et al., 2013 & 2014). And although QOL was not significantly affected for cancer patients with spinal lesions, the Cyberknife system did significantly reduce patient pain post-treatment (Gagnon et al., 2009; Degen et al., 2005).

Despite these nonsignificant findings, the majority of the research suggests that the Cyberknife essentially maintained QOL scores before and after treatment. In the future, it would be important for research to compare QOL scores for patients undergoing Cyberknife treatment versus other treatment methods. Research in this field could help elucidate which treatment method has the best impact on patients’ perceptions of their physical and mental QOL and would ultimately enable them to return to their daily lives at a faster pace.

 

References

Calman, K. C. (1984). Quality of life in cancer patients–an hypothesis. [Abstract]. Journal of Medical Ethics, 10(3), 124-127.

Degen, J. W., Gagnon, G. J., Voyadzis, J., Mcrae, D. A., Lunsden, M., Dieterich, S., Henderson, F. C. (2005). CyberKnife stereotactic radiosurgical treatment of spinal tumors for pain control and quality of life [Abstract]. Journal of Neurosurgery: Spine, 2(5), 540-549.

Gagnon, G. J., Nasr, N. M., Liao, J. J., Molzahn, I., Marsh, D., Mcrae, D., & Henderson, F. C. (2009). Treatment Of Spinal Tumors Using Cyberknife Fractionated Stereotactic Radiosurgery [Abstract]. Neurosurgery, 64(2), 297-307.

Katz, A. J., Santoro, M., Diblasio, F., & Ashley, R. (2013). Stereotactic body radiotherapy for localized prostate cancer: Disease control and quality of life at 6 years. Radiation Oncology, 8(1), 118-125.

Katz, A. J., & Kang, J. (2014). Quality of Life and Toxicity after SBRT for Organ-Confined Prostate Cancer, a 7-Year Study. Frontiers in Oncology, 4.


Can Steam Autoclaving be a Possible Alternative to Incineration?

Posted on Dec 2, 2016 in Writing Assignment 7

Steam autoclaving is a procedure that uses steam, heat, and pressure to inactivate microorganisms and sterilize medical tools (Schipanski, 1969). It has been considered one of the newest and best alternatives to incineration for treating medical waste. The U.S. Environmental Protection Agency reported that over 90% of infectious medical waste was treated by incineration before 1997 (EPA, 2016). However, incinerators were found to emit toxic pollutants that are detrimental to the environment and human health. Dioxin is a typical example of a hazardous emission, one that can lead to birth defects, cancer, diabetes, and immune system disorders (Cole, 1997). Ineffective medical waste management can also cause serious public health problems. For instance, improperly transporting mixed waste to open dumpsites can pollute soil and groundwater, which may sicken humans and other species living around the disposal site (Su, 2005). For these reasons, researchers have been very interested in investigating alternative medical waste treatment methods that offer more cost-effective and safer disposal practices.

Although medical waste autoclaving is considered a relatively new disposal method, the autoclave was actually invented in 1965 by Emil R. Schipanski. An autoclave is designed to accomplish sterilization by inactivating bacteria under steam pressure in a closed chamber (Schipanski, 1969). The procedure requires a shorter time at a higher temperature, or a longer time at a lower temperature, to sterilize a population. The optimal conditions for sterilization were determined to be 121 °C for 60 minutes and 131 °C for 30 minutes (Hossain et al., 2012). Recent studies have found that autoclaving is advantageous in various ways. It does not require the input materials to change physical form, whereas incineration burns waste into ash and gas, and medical tools that have been autoclaved can either be recycled and reused or safely transported to landfills (Hossain et al., 2012). Autoclaving has also been found to be more cost-effective than incineration. Although the capital investment for an autoclave might be triple the price of an incinerator, this cost can be recouped through lower maintenance fees and energy consumption. Autoclaves are also simpler and safer to operate, which reduces the chance of health care workers being injured during handling (Ferdowsi et al., 2010). India’s Bio-Medical Waste Rules highly recommend using autoclaves to disinfect and treat infectious bio-medical waste (Government of India, 2016).
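
The time-temperature tradeoff follows the standard thermal-death model used in moist-heat sterilization: raising the temperature by one z-value (commonly taken as 10 °C for steam) cuts the required exposure time tenfold. The sketch below applies that textbook relation to the two conditions cited above; it is an illustration, not the calculation from Hossain et al.

```python
# Standard lethality model for moist-heat sterilization: t minutes at
# temperature T delivers t * 10**((T - T_REF) / Z) equivalent minutes at
# the reference temperature. Illustration only, not Hossain et al.'s method.
T_REF = 121.0  # reference temperature, deg C
Z = 10.0       # z-value: temperature rise that cuts required time 10-fold

def equivalent_minutes(minutes: float, temp_c: float) -> float:
    return minutes * 10 ** ((temp_c - T_REF) / Z)

for temp, minutes in [(121.0, 60), (131.0, 30)]:
    print(f"{minutes} min at {temp} C ~ "
          f"{equivalent_minutes(minutes, temp):.0f} min at {T_REF} C")
# 30 min at 131 C delivers ~300 equivalent minutes, i.e. the hotter cycle
# achieves more lethality in half the clock time.
```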


Table 1. The comparison of costs for autoclave and incineration. Table taken from Ferdowsi, 2010.

Steam autoclaving also has several disadvantages. Autoclaving was found to be unsuitable for treating laboratory chemicals, organic solvents, and anatomical, pathological, bulky, low-level radioactive, and chemotherapy wastes (Al-Khatib et al., 2009). In addition, the unexpected regrowth of bacteria on sterilized medical tools has been overlooked by many. A recent study tested the effectiveness of steam autoclaving in deactivating six types of bacteria: three Gram-positive and three Gram-negative species were cultured in medical-waste-like environments and autoclaved. All bacteria were found to be inactivated on day zero and day one after the treatment. However, Gram-positive bacteria started regrowing two days after the treatment, and after six days all of the bacteria had regrown on the sterilized tools. This finding indicated steam autoclaving’s failure to achieve lasting sterilization, and the results argued against autoclaving as an alternative to incineration (Hossain et al., 2012).


Table 2. Re-growth of bacteria in sterilized waste. Table taken from Hossain et al., 2012.

Although the public is aware of incineration’s potential to emit harmful pollutants, the difficulty of finding a flawless alternative keeps incineration a popular method of medical waste treatment. Autoclaving does have various advantages, such as its cost-effectiveness and its potential to allow medical waste to be recycled and reused. However, its failure at its primary goal of sterilization outweighs those advantages: it deactivates microorganisms only for a short period of time, whereas incineration is far more powerful at eliminating infective agents. Thus, steam autoclaving should not be recommended as a replacement for incineration.

 

Works Cited

Al-Khatib, I.A., Al-Qaroot, Y.S., & Ali-Shtayeh, M.S. (2009). Management of healthcare waste in circumstances of limited resources: A case study in the hospitals of Nablus city, Palestine. Waste Management & Research, 27, 305-312.

Cole, E. (1997). Application of Disinfection and Sterilization to Infectious Waste Management. North Carolina Board of Science and Technology.

Ferdowsi, A., Ferdosi, M., & Mehrani, M. J. (2013). Incineration or Autoclave? A Comparative Study in Isfahan Hospitals Waste Management System (2010). Materia Socio-Medica, 25(1), 48–51. http://doi.org/10.5455/msm.2013.25.48-51

Government of India, Ministry of Environment, Forest and Climate Change. (2016). Bio-Medical Waste (Management and Handling) Rules. Gazette of India, Extraordinary, Part II, Section 3, Sub-section (i).

Hossain, S., Balakrishnan, V., Rahman, N.N., Sarker, Z.I., & Kadir, M.O. (2012). Treatment of Clinical Solid Waste Using a Steam Autoclave as a Possible Alternative Technology to Incineration. International Journal of Environmental Research and Public Health, 9(3), 855-867. doi:10.3390/ijerph9030855

Schipanski, E. (1969). Autoclave. US patent.

Su, G.S. (2005). Water-borne illness from contaminated drinking water sources in close proximity to a dumpsite in Payatas, The Philippines.

U.S. Environmental Protection Agency. (2016). Medical Waste.  Retrieved from https://www.epa.gov/rcra/medical-waste


Lina Mohamed-Assignment 7

Posted on Dec 2, 2016 in Writing Assignment 7

Lina Mohamed

MHC Writing Assignment 7, Professor Kowach

How Do Pills Affect Our Bodies?

Overdosing on prescription drugs and misusing them is a major issue that needs to be addressed. There are many dangers to poly-drug use. People assume that poly-drug effects will not occur as long as they steer clear of alcohol while taking certain drugs, such as painkillers. However, poly-drug effects can arise from simply taking more than one prescription medication for recreational purposes. 2

Mixing opiates and benzodiazepines (benzos) can lead to serious symptoms, and continued misuse and mixing of drugs can lead to internal damage. Pills can damage the lungs, stomach, intestines, liver, muscles, and kidneys.

The lungs are affected because these drugs suppress the body’s ability to breathe, which is associated with a risk of pneumonia; drug abusers also experience shortness of breath. Opiates cause constipation even at a normal dosage, so overdosing causes serious problems. Addicts usually end up relying on laxatives to move the bowels or else risk further damage, a condition called narcotic bowel syndrome (NBS).

The liver is probably the most affected organ, because pills are broken down and processed by the liver. Overdosing or taking many pills therefore stresses the liver heavily, as it ends up carrying many toxins from the breakdown process. This is mainly due to the acetaminophen that is in many formulations of these drugs. High levels of acetaminophen can cause liver failure in severe cases; 50,000 people are rushed to emergency rooms each year and about 200 die. 3

Abuse of painkillers and exceeding recommended combinations can also have a damaging effect on the kidneys. Severe overdoses can even lead to transplants or dialysis. However, it is not the opiate itself that disables the kidneys but the secondary analgesics, such as acetaminophen. 3

These effects come from taking pills; the effects of snorting or injecting these drugs are even worse. People are still not as scared as they should be, or simply do not understand the dangers, because they are not advertised enough. Simple changes could reduce the number of overdoses, particularly among the groups that misuse pills the most. Health insurers can identify and address improper prescribing and use of painkillers. They can also increase coverage for other treatments that reduce pain, such as physical therapy and other homeopathic remedies.


Figure 1: Shows which states have higher overdose rates illustrated by darker colors (CDC).

References:

 

1. Deepak, C., M.D. (2002). Alternative Medicine: The Definitive Guide (2nd ed.). Retrieved from Google Scholar: https://books.google.com/books?hl=en&lr=&id=OyrhatOdk9gC&oi=fnd&pg=PA270&dq=natural+alternatives+to+painkillers&ots=68fhOAE58y&sig=-malo2jqTO8ylS5N_CVlAN-PFSo#v=onepage&q=natural%20alternatives%20to%20painkillers&f=false

2. Dangers of Mixing Opiates and Benzodiazepines: Vicodin, Xanax, Oxycodone and Valium. (2002). American Addiction Centers. Retrieved from: http://americanaddictioncenters.org/prescription-drugs/dangers-of-mixing/

3. What Parts of the Body May Be Severely Damaged by Painkiller Abuse? Narconon. Retrieved from: http://www.narconon.org/drug-abuse/prescription/painkillers/body-damage.html

4. Unknown. (2013). Politicization of Health Care Preventing Real Changes to Out-Of-Control System, Researchers Suggest. Johns Hopkins Medicine. Retrieved from Google Scholar: http://www.hopkinsmedicine.org/news/media/releases/politicization_of_health_care_preventing_real_changes_to_out_of_control_system_researchers_suggest

5. Ullman (1988). Homeopathy Medicine For The 21st Century. The Futurist. Retrieved from: http://crawl.prod.proquest.com.s3.amazonaws.com/fpcache/943614fc1c15f8962b8e86ff8603fffc.pdf?AWSAccessKeyId=AKIAJF7V7KNV2KKY2NUQ&Expires=1477266797&Signature=%2F925B7VJ%2FgV1kwnxH5DTa9%2FbfLM%3D

 

The Skillset Needed to Succeed in eSports

Posted on Dec 2, 2016 in Writing Assignment 7

Just as traditional sports have multiple categories, such as basketball, baseball, and soccer, eSports has multiple subsections as well. “eSports are commonly organized around specific genres of games, such as Multiplayer Online Battle Arenas (e.g. League of Legends, Dota 2), First-Person Shooters (e.g. Counter-Strike: Global Offensive), Real Time Strategy (e.g. Starcraft 2), Collectible Card Games (e.g. Hearthstone) or Sports games (e.g. FIFA-series), therefore they form many sub-cultures within eSports, in the same way that ‘traditional’ sports do” (Hamari, 2015). Each subsection of eSports demands a different set of skills to become the best of the best. Professional gamers of every subsection train and practice up to 12 hours a day to maintain and improve their level of play.

Starcraft 2, a Real Time Strategy game, is one of the most demanding games in the industry. “As in chess, the object of the game is to defeat your opponent’s army. Unlike chess, however, StarCraft doesn’t involve players taking turns, and requires more complex resource management in that you must continually generate the pieces at your disposal as you play. To do as well as the pros, you must also achieve an extremely rapid rate of keyboard and mouse inputs. Some players carry out more than 300 such actions a minute, rising to 10 a second when up against it. Add in the need to think strategically and outwit your opponent by pre-empting their moves, and the top players start to look superhuman” (Heaven, 2014). An action in Starcraft is anything such as sending a unit to a location, attacking an enemy, expanding your army, or utilizing resources. To put 10 actions per second into perspective, try doing 10 of those things in a single second, every second, for the entirety of a game. It’s incredibly demanding and seems nearly impossible, yet professional gamers make it look easy.

Not only does eSports demand superhuman reflexes and dexterity, it also requires strategy to win. At the professional level, a player who plays without a plan or strategy will lose nearly 100% of the time. “Skillful play in eSports should not be limited to technical dexterity… but also includes sporting intelligence… Central to the notion of sport is to outsmart the competition… To accomplish this, a successful eSports player must possess comprehensive knowledge and skills, ‘with game sense and (tactical and strategic) judgment to act effectively to settle the issue at hand or help the [player] solve the game problem’” (Jenny, 2016). Applying this to Starcraft 2, players develop an initial strategy and follow it through, adjusting it according to the opponent’s actions. All of this is done in real time, at the same time they are performing those 10 actions per second, which is truly incredible.

Several games require professional gamers to make crucial in-game decisions in the heat of the moment. “A game of this type is called a strategic-form game according to game theory. In order to find a solution, also called Nash equilibrium, we need to compute the so-called game matrix consisting of the individual payoffs or outcomes each team obtains from choosing one or the other strategy” (Wagner, 2006). Decisions have to be optimal, because a bad decision will quickly be exploited by opponents of professional caliber. Decisions also have to be quick: players cannot hesitate, because while they hesitate, the opponent does not, and the advantage is lost.
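
Wagner’s point can be made concrete with a toy example. The sketch below uses a made-up two-team payoff matrix, not data from any real match, and enumerates pure-strategy Nash equilibria: strategy pairs where neither team can improve its own payoff by deviating alone.

```python
import itertools

# Toy strategic-form game: payoffs[i][j] = (A's payoff, B's payoff) when
# team A plays strategy i and team B plays strategy j. Made-up numbers.
strategies = ["aggressive", "defensive"]
payoffs = [[(3, 1), (0, 2)],
           [(1, 0), (2, 3)]]

def is_nash(i: int, j: int) -> bool:
    a, b = payoffs[i][j]
    # A must not gain by switching rows; B must not gain by switching columns.
    return (all(payoffs[k][j][0] <= a for k in range(len(strategies))) and
            all(payoffs[i][k][1] <= b for k in range(len(strategies))))

for i, j in itertools.product(range(len(strategies)), repeat=2):
    if is_nash(i, j):
        print(f"Nash equilibrium: A {strategies[i]}, B {strategies[j]}")
# -> Nash equilibrium: A defensive, B defensive
```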


Figure 1: Professional gamer “Pobelter” moves his eyes every 0.07 seconds to take in game information (Erzberger, 2016)

As shown in Figure 1, a professional player moves his eyes every 0.07 seconds while in game, roughly three times faster than it takes a person to read a word. In game, an incredible amount of information is taken in every second, and that information is always changing, since games are dynamic and played in real time. Professional players have to absorb this information and decide within seconds what action will produce the best-case scenario for achieving victory.

With its combination of incredible reflexes and hand-eye coordination, real-time strategic adaptation, and instantaneous decision making, being a professional gamer is not as easy as you might think.

References

Erzberger, T. (2016, October 28). Mid lane whiz Pobelter scores 41 on the Wonderlic test.

Hamari, J., & Sjöblom, M. (2015, November 6). What Is eSports and Why Do People Watch It? Internet Research, 27(2).

Heaven, D. (2014, August 16). Rise and rise of esports. New Scientist, 223(2982), 17.

Jenny, S. E., Manning, R., Keiper, M. C., & Olrich, T. W. (2016, March 11). Virtual(ly) Athletes: Where eSports Fit Within the Definition of “Sport”. Quest, 1-18.

Wagner, M. G. (2006, January). On the Scientific Relevance of eSports.