The Socio-Political Influence of Television

Posted on Nov 30, 2016 in Writing Assignment 3

Television has proven to be one of the most powerful media of communication, primarily because it combines artistic freedom with a broad sphere of influence. It has become a staple of the American home, and with the introduction of streaming platforms and video-on-demand, its omnipresence now means access to any show at any time, all at the click of a button (Lotz, 57). The allure of escapism, of entering a storyline that is removed from reality yet still relevant to it, is why television has cemented itself in day-to-day life (Hamilton, 403). In addition, the shows and commercials seen throughout our lifetimes act as catalysts for the trends and movements of everyday society. The issues rooted in a show’s premise, spanning race, religion, class, and gender, can influence and even change the way an audience thinks about those topics. Conversely, the content produced on television has also been shaped by the changing social climate of both national and global societies. One major social movement that has gained traction over the past decade is visibility and human rights for the LGBT community; as Reifová notes, “some digital television tools – morphing, for example – are very well suited for representing continuity, fluidity, and the implosive destruction of ‘classic’ binary oppositions” (Reifová, 1238). Television possesses a cultural sensitivity not found in print media like newspapers and magazines, and is accordingly suited to the constant evolution of a multi-faceted audience.

Figure 1. Breakdown of where people watch shows.

Television has broken ground on even bigger social and political movements, the result of a more liberal, more reactive audience. Feminism is a prime example of changing viewership and a demand for anti-sexist programming. For instance, in archiving British television, the content currently filed under “women’s programming” is no longer in sync with older shows focused on domestic chores such as cooking, fashion, and child care (Moseley, 156). This is because gender stereotypes in the Western world are no longer stagnant; they have achieved a fluidity that can be attributed to a more socially motivated generation of teens and adults. In addressing political issues, television finds itself lost between sensationalist headlines and neutral, factually sound news. Channels like C-SPAN and PBS have limited coverage and deteriorating funding because of 1) a lack of public interest and 2) a very dry perspective on governmental policies. Additionally, since government TV does not generate an exorbitant amount of revenue, there are simply fewer staff and reporters working in this area of media (Gormley Jr., 357). The social effect, however, is a public less educated on issues important to their lives and an industry driven by flashy, ephemeral content.

 

Works Cited:

  • Reifová, Irena. “’It Has Happened Before, It Will Happen Again’: The Third Golden Age of Television Fiction.” Sociologický Časopis / Czech Sociological Review, vol. 44, no. 6, 2008, pp. 1237–1238. www.jstor.org/stable/41132684.
  • Lotz, Amanda D. “What Is U.S. Television Now?” The Annals of the American Academy of Political and Social Science, vol. 625, 2009, pp. 49–59. www.jstor.org/stable/40375904.
  • Hamilton, Robert V., and Richard H. Lawless. “Television Within the Social Matrix.” The Public Opinion Quarterly, vol. 20, no. 2, 1956, pp. 393–403. www.jstor.org/stable/2746311.
  • Moseley, Rachel, and Helen Wheatley. “Is Archiving a Feminist Issue? Historical Research and the Past, Present, and Future of Television Studies.” Cinema Journal, vol. 47, no. 3, 2008, pp. 152–158. www.jstor.org/stable/30136123.
  • Gormley, William T. “Television Coverage of State Government.” The Public Opinion Quarterly, vol. 42, no. 3, 1978, pp. 354–359. www.jstor.org/stable/2748298.

Repetition and Order: How Space Filling Fractal Curves Exhibit Aesthetics

Posted on Nov 9, 2016 in Writing Assignment 3

A space filling curve is a line that can be drawn continuously, without lifting the pen if drawn on paper. When this curve is drawn through infinitely many iterations, it completely fills a square without any holes [1]. One example of such a curve is the Hilbert curve. To produce the Hilbert curve, start with the first iteration and repeatedly apply the rule that produces iteration n+1 from iteration n, detailed in Figure 1, until the desired iteration is reached. As the iteration number increases, the curve gets longer, and in the limit it completely fills the space.
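To make this construction concrete, the short sketch below generates the vertices of a given iteration using an L-system rewriting of the curve and a turtle-style reading of the resulting string. This is a standard way of producing the Hilbert curve; the symbol names and the function name are simply illustrative choices.

```python
def hilbert_points(iterations):
    """Return the vertices of the given iteration of the Hilbert curve."""
    rules = {"A": "+BF-AFA-FB+", "B": "-AF+BFB+FA-"}  # L-system rewriting rules
    s = "A"
    for _ in range(iterations):                       # rewrite the string n times
        s = "".join(rules.get(ch, ch) for ch in s)

    x, y = 0, 0
    dx, dy = 1, 0                                     # start at the origin, heading right
    points = [(x, y)]
    for ch in s:
        if ch == "F":                                 # draw one unit segment
            x, y = x + dx, y + dy
            points.append((x, y))
        elif ch == "+":                               # turn left 90 degrees
            dx, dy = -dy, dx
        elif ch == "-":                               # turn right 90 degrees
            dx, dy = dy, -dx
    return points

print(hilbert_points(1))        # the four corners of the U shape in Figure 1
print(len(hilbert_points(2)))   # 16 vertices: each iteration quadruples the count
```

The number of vertices quadruples with every iteration (4, 16, 64, …), which mirrors how the curve’s length grows as it fills the square.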


Figure 1. The process of getting from the first iteration to the second of the Hilbert curve

 


Figure 2. The first six iterations of the Hilbert curve (Wikipedia)

 

The Hilbert curve, or any space filling curve, may seem to be merely a theoretical device for solving a given problem in mathematics. The mathematician Peano first discovered a space filling curve to show that, contrary to intuition, a continuous curve can pass through every point of a two-dimensional region. Yet one may argue that these curves do hold aesthetic value, due to their fractal nature.

What gives fractals this aesthetic appeal? One idea might be that of repetition. Because fractals are self-similar, they exhibit a form of repetition, producing the same form infinitely, growing smaller with each iteration. Ellen Levy makes two points about the general concept of repetition in art, which we can apply to the repetition found in fractals. First, repetition “…helps defer closure in a work of art by establishing expectations of recurrence while giving pleasure to the viewer” [2]. Second, “Active repetition in art can evoke evolutionary processes…”, which can be understood as the repetitive aspect of nature. However, we cannot use this second point to describe fractals, because nature is itself fractal. Gleick states that “Clouds are not spheres. Mountains are not cones. Lightning does not travel in a straight line” [3]. Defining the aesthetics of fractals in terms of something that is itself composed of fractal forms creates a circular argument.

Instead, fractals may be considered art because of the interplay of chaos and order. On the surface, fractal images may appear to be complex, intricate shapes. However, there is a great deal of order and pattern to fractals. This sense of order produces a natural “subconscious curiosity to find relation, symmetry and recurrence” [4, p. 226]. Garousi and Kowsari argue that in fractals pure order and regularity coexist with chaos and disorder, much like the outer world we live in [4]. By using this order to make sense of the complexity, we can begin to understand fractals and appreciate them as an art form in this light.

Take, for example, the Koch snowflake. Its edges exhibit the fractal design, repeating infinitely. It may be difficult to understand how the edges work purely by looking at the figure, but by understanding the process that creates it, one can see the underlying structure and how an infinite shape is produced. In this way, the fractal takes a seemingly complex image and expresses it as a simple formula, applied step by step, giving it order.
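A small sketch of that step-by-step formula follows: each pass replaces every edge of the current outline with four shorter edges carrying a triangular bump, the rule that generates the edges seen in Figure 3. The function name is illustrative, and the bump is assumed to point in the direction given by rotating each edge 60 degrees counterclockwise.

```python
import math

def koch_iterate(points):
    """One Koch step: replace each edge (p, q) with four edges of one-third length."""
    new_points = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
        a = (x1 + dx, y1 + dy)                        # one third of the way along the edge
        b = (x1 + 2 * dx, y1 + 2 * dy)                # two thirds of the way along
        s = math.sin(math.pi / 3)                     # height factor of the equilateral bump
        apex = (a[0] + 0.5 * dx - s * dy,             # rotate (dx, dy) by 60 degrees
                a[1] + 0.5 * dy + s * dx)
        new_points += [(x1, y1), a, apex, b]
    new_points.append(points[-1])
    return new_points

# start from a closed equilateral triangle and apply the rule four times
curve = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2), (0.0, 0.0)]
for _ in range(4):
    curve = koch_iterate(curve)
print(len(curve) - 1)   # 3 * 4**4 = 768 edges after four iterations
```

The edge count is multiplied by four at every step, so the outline’s complexity explodes while the rule producing it stays the same: a simple formula giving order.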


Figure 3. The first four iterations of the Koch snowflake (Wikipedia)

Because of all this, we can begin to understand how a space filling curve can be a form of art. A fractal curve of this kind takes an infinite set of points in some multidimensional space, such as a plane or a volume (even a hyperspace), and connects all of these points according to a specific pattern, repeated until every point lies on the curve. This infinite set is therefore expressed by a single continuous line, bringing order to a concept we could not otherwise conceive. Fundamentally, these space filling curves embody the same artistic values as other fractals, so we can conclude that they are, in fact, a form of art.

 

References:

[1]  Séébold, Patrice. “Tag-Systems For The Hilbert Curve.” Discrete Mathematics & Theoretical Computer Science (DMTCS) 9.2 (2007): 213-226. Academic Search Complete. Web. 9 Nov. 2016.

[2]  Levy, Ellen K. “Repetition And The Scientific Model In Art.” Art Journal 55.1 (1996): 79. Academic Search Complete. Web. 9 Nov. 2016.

[3]  Liebovitch, Larry S., and Daniela Scheurle. “Two lessons from fractals and chaos.” Complexity 5.4 (2000): 34-43.

[4]  Garousi, Mehrdad, and Masoud Kowsari. “Fractal Art And Postmodern Society.” Journal Of Visual Art Practice 10.3 (2012): 215-229. Academic Search Complete. Web. 9 Nov. 2016.

[5]  Arlinghaus, Sandra Lach. “Fractals Take a Central Place.” Geografiska Annaler. Series B, Human Geography, vol. 67, no. 2, 1985, pp. 83–88. www.jstor.org/stable/490419.

 

Language, Consciousness, and Bilingualism: Do the Languages We Speak Really Affect Thought Patterns?

Posted on Oct 31, 2016 in Writing Assignment 3

Language and consciousness are inseparable. Certainly, consciousness exists without language; plenty of non-verbal animals are conscious in a way similar to humans. However, human consciousness as we know it does not exist without language. At one point in history it may have, but no longer. Furthermore, language has never existed without consciousness. Language can be said to be one of multiple “agents of consciousness”. Unfortunately, defining consciousness is a rather difficult task. As such, defining the relationship between language and consciousness is a significant challenge. In fact, Jens Allwood states in Some Comments on Wallace Chafe’s “How Consciousness Shapes Language” that “pursuing this task [the question of the relationship between consciousness and identification, understanding and interpretation] will involve an in-depth probe into the nature of the relation between consciousness and the need for background information”. Physically, one can see the connections between the parts of the brain responsible for written and spoken language comprehension in the diagram below, produced by the Human Connectome Project at the University of Southern California. The second diagram displays these connections within the human brain as a whole.


Figure 1. Structural Connectivity Between Language Processing Systems in the Human Brain. The figure displays the strength of connection between the language processing structures of the brain; the warmer the color, the stronger the connection.

 


Figure 2. Overall Connectivity of White Matter Fibers in the Human Brain. This figure shows the connectivity between white matter fibers in the brain as a whole; it is the information from Figure 1 put into context

We know that the use of language assumes a person has at least cursory knowledge of each word being used and of the topic of discussion. Wallace L. Chafe defines this in Language and Consciousness as “the linguistic distinction between given and new information”. Chafe asserts that differences in language create differences in assumed given and new information. Furthermore, depending on the language, a speaker may be blissfully unaware of what counts as given or new information in their own language versus that of another. Coming back to Allwood, one must consider that, at its most basic, language is composed of “intonation units”, the concept being that each sound we make corresponds to a specific idea. Allwood argues against the validity of such units as concrete concepts and against their correlation to one specific idea or concept. Even so, the ability of language to create ideas, images, and concepts within our minds is clear. As such, the connection between language and consciousness is undeniable.

This connection bears a specific relevance to the concepts of bilingualism and second language learning. The question arises, as posed by Richard W. Schmidt in The Role of Consciousness in Second Language Learning, whether language learning is a conscious process, an unconscious process, or some combination of the two. One could even question whether conscious and unconscious thought should be considered separate in the first place. Schmidt concludes that the role of conscious thought in language learning has been underplayed and that the unconscious “picking up” of language does not play quite as large a role as we believe it to. This assumption may stem from the fact that not enough research has been done on the conscious side of second language learning.

An experiment conducted by the University of Newcastle upon Tyne in England attempted to objectively pin down the differences in thought process and categorization that arise in bilingual minds. The researchers sought to observe differences in the way bilingual speakers of English and Japanese categorized objects. A subject’s level of familiarity with English affected whether they more easily separated objects into categories by shape or by material. For example, subjects were given an object such as a cork pyramid and asked to match it to one of two objects presented to them: one matched the original object in shape (a plastic pyramid) and one matched it in material (a piece of cork). The more experienced in English the subject was, the more often they categorized the object by shape. Monolingual English speakers also more often categorize by shape, while monolingual Japanese speakers more often categorize by material. In reference to consciousness, this may mean that English speakers inherently think in a more concrete, object-based fashion while Japanese speakers think in a more abstract fashion when it comes to categorization. This raises the question of whether a speaker’s actual consciousness is shaped by the language(s) they speak.

While it is a widely held belief that this is indeed the case, John H. McWhorter, in The Language Hoax: Why the World Looks the Same in Any Language, disagrees. McWhorter does not dispute that, because language and culture are inherently connected, different cultures and thus languages have words for concepts not found in other languages. However, he disputes that this means speakers of different languages think about those concepts differently. He therefore disagrees with the belief that speaking a certain language creates a certain worldview or way of experiencing life in the speaker. Nevertheless, research like this displays the profound connections between language and consciousness. Whether this consciousness consists of worldviews or simply of language’s unique ability to conjure images in our minds is beside the point. The way we think is indubitably related to the very human act of connecting concepts to sounds and symbols.

 

References

Allwood, J. (1996). Some Comments on Wallace Chafe’s “How Consciousness Shapes Language” Pragmatics and Cognition, 4(1), 55-64. Retrieved October 19, 2016.

Chafe, W. L. (1974, March). Language and Consciousness. Language, 50(1), 111-133. Retrieved October 19, 2016, from JSTOR.

Cook, V., Bassetti, B., Kasai, C., Sasaki, M., & Takahashi, J. A. (2006). Do Bilinguals Have Different Concepts? The Case of Shape and Material in Japanese L2 Users of English [Abstract]. International Journal of Bilingualism, 10(2), 137-152. Retrieved October 19, 2016.

Hoge, K. (2014, August 7). The Language Hoax: Why the World Looks the Same in Any Language, by John H. McWhorter. Retrieved October 19, 2016, from https://www.timeshighereducation.com/books/the-language-hoax-why-the-world-looks-the-same-in-any-language-by-john-h-mcwhorter/2014926.article

Schmidt, R. W. (1988, July). The Role of Consciousness in Second Language Learning. Applied Linguistics, 11(2), 129-158. Retrieved October 19, 2016, from Oxford journals.

The Bridge between RNA and Protein: Translation and Protein Synthesis

Posted on Oct 23, 2016 in Writing Assignment 3

Protein synthesis is a complicated process, starting with the transcription of mRNA from DNA and ending with the translation of mRNA into peptide chains. In prokaryotes, transcription and translation can occur simultaneously, but in eukaryotes they occur separately, in different areas of the cell. The figure below gives a visual example of the translation process.

A simplification of the translation process that showcases some of the multiple different RNAs and how the peptide chain grows in length.

In eukaryotic organisms, there are many different stages and versions of RNA. Through RNA we could find ways to modify protein synthesis in cells. One route is animal microRNA, which has been shown to regulate gene expression. MicroRNAs are RNA regulators that control expression at the post-transcriptional level in eukaryotes (Pillai 2007). In one study, translation was repressed and, as a result, hundreds of proteins encoded in the genome were not produced (Selbach, 2008). We could possibly find a way to select which genes are expressed by manipulating RNA and how it regulates translation. We can see this type of research in influenza studies: short interfering RNAs have been shown to impair influenza virus production by attacking the RNAs of the virus rather than the RNA of the infected cell (Ge 2003).

The amount of information that can be contained in RNA is outstanding. Beyond protein synthesis, there is even data suggesting that RNA may hold information coding for the death of the cell itself. Some neurons were found to die after being deprived of a growth factor, but when inhibitors of transcription and translation were applied, the cells continued to live and function well (Martin 1988). The fact that the cell had to produce proteins in order to die supports the idea that its own death is literally written in its genetic code.

However, that same code contains processes that protect the RNA from damage. In eukaryotes, DNA and its RNA transcripts contain introns, so-called “junk DNA,” that do not code for anything. During the splicing process, in which this junk DNA is cut out of the RNA, a mild heat shock prompts the cell to produce heat shock proteins that protect splicing from more severe heat that would otherwise disrupt the transcript, thus saving it (Yost 1986).

Sources

Pillai, Ramesh S., Suvendra N. Bhattacharyya, and Witold Filipowicz. “Repression of Protein Synthesis by MiRNAs: How Many Mechanisms?”Trends in Cell Biology 17.3 (2007): 118-26.

Selbach, Matthias, Björn Schwanhäusser, Nadine Thierfelder, Zhuo Fang, Raya Khanin, and Nikolaus Rajewsky. “Widespread Changes in Protein Synthesis Induced by MicroRNAs.” Nature 455.7209 (2008): 58-63.

Ge, Q., M. T. Mcmanus, T. Nguyen, C.-H. Shen, P. A. Sharp, H. N. Eisen, and J. Chen. “RNA Interference of Influenza Virus Production by Directly Targeting MRNA for Degradation and Indirectly Inhibiting All Viral RNA Transcription.” Proceedings of the National Academy of Sciences 100.5 (2003): 2718-723.

Martin, D. P. “Inhibitors of Protein Synthesis and RNA Synthesis Prevent Neuronal Death Caused by Nerve Growth Factor Deprivation.” The Journal of Cell Biology 106.3 (1988): 829-44.

Yost, H. Joseph, and Susan Lindquist. “RNA Splicing Is Interrupted by Heat Shock and Is Rescued by Heat Shock Protein Synthesis.” Cell 45.2 (1986): 185-93.

The Outdated Laws of Outer Space

Posted on Oct 19, 2016 in Writing Assignment 3

Outer space abounds with a virtually infinite supply of materials that could redefine life on Earth. As noted by Reynolds, “The smallest known near-Earth metal asteroid contains more metal than has been mined by humanity since the beginning of time” (Reinstein 1999). One of the most valuable resources in our solar neighborhood is Helium-3 on the Moon, with reserves estimated to be capable of producing ten times as much energy as the Earth’s fossil fuels (Reinstein 1999). It seems like the sky is the next frontier of business – but where are all the competitors?

It turns out that the basis of international law regarding outer space comes from the Cold War. The United States and the USSR created the Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies, OST for short, in an attempt to stop each other from militarizing space (McDougal et al. 1963). One hundred other nations have signed onto this treaty, giving it weak standing as international law (Reinstein 1999).

The OST essentially approves of “exploration and use” of space and its celestial bodies, but “carried out for the benefit and in the interests of all countries, irrespective of their degree of economic or scientific development, and [for] the province of all mankind” (Reinstein 1999). Since the goal of the treaty was to prevent militarization, and not promote private business, many of the terms are left vague – even “outer space” is not properly defined (Keefe 1995). This poor wording makes the legal risk of colonizing and excavating sections of celestial bodies outweigh any potential benefits for modern business.

Even worse is the OST’s outlook on property rights. “For the benefit and in the interests of all countries” implies that every country will be given equal geological footing in outer space; that is, no country will be able to claim any land as theirs. And because countries maintain jurisdiction over the astronauts they send to this land, the astronauts themselves are not allowed to claim any land for themselves (Reinstein 1999). Any resources mined or materials otherwise gained by a business would need to somehow be partitioned among all countries, even if the venture were privately funded (Reynolds et al. 1989). This partitioning would likely require an international committee, and it is not hard to see how quickly the OST complicates every aspect of space colonization (Reinstein 1999). Since this treaty disincentivizes individual effort, it is highly unlikely that any cosmic exploration mission will be launched anytime soon (Keefe 1995).

Gruner suggests the opposite ethical approach to the OST – focus on the ambitions of the first-world countries. He argues that we must colonize space before overpopulation and resource depletion start destabilizing our world. Expansion of the human race into outer space, if done right, could remedy both of these problems. Motivation for potential colonizers and miners would draw upon the 19th century American ideas of land ownership. Instead of racing out west, however, settlers would be racing to the Moon and to Mars (Gruner 2004).

The current set of treaties outlining the laws of outer space is so outdated that many nations now view them more as guidelines for future laws than as laws themselves (Gruner 2004). Whether humanity comes together and puts every country on equal footing or chooses to expedite the efforts of richer countries, the overall format and effect of our laws regarding outer space exploration need to be redrafted if we wish to ever settle another planet.

 

References

Reinstein EJ. 1999. Owning Outer Space 59:74

McDougal MS, Lasswell HD, Vlasic IA, Smith JC. 1963. The Enjoyment and Acquisition of Resources in Outer Space 541:560

Reynolds GH, Merges RP. 1989. Outer Space: Problems of Law and Policy 275:278

Keefe H. 1995. Making the Final Frontier Feasible: A Critical Look at the Current Body of Outer Space Law 345:371

Gruner BC. 2004. A New Hope For International Space Law: Incorporating Nineteenth Century First Possession Principles Into The 1967 Space Treaty For The Colonization Of Outer Space In The Twenty-First Century 299:357

Method of Instruction and Effective Teaching

Posted on Oct 18, 2016 in Writing Assignment 3

Effective teaching is a rare sight nowadays. It occurs when a professor can correctly gauge students’ performance and appropriately assist them in learning and understanding new material. “Too often the assumption is made that because the instructor has a M.S. or Ph.D. degree, he is ready to take over teaching any subject in his field without advanced preparation… Teaching is the organization of learning. Thus, any instructor, regardless of his knowledge of subject matter, must spend considerable time organizing his courses before effective teaching is attained in his classes” (Gibson, 1954).

A professor’s method of teaching arguably has the greatest impact on how a student learns. Every professor has his or her own style of teaching, whether it be oral lecture, reading off presentation slides, exercise-heavy lectures, class discussions, or hands-on instruction, and every style has its own pros and cons. Studies show that the learning environment (how accessible the instructor is, feedback, engagement and activity, and so on) has the greatest impact on student performance (Bell, 2013).

Though professors and students both have preferred teaching methods, the hands-on instruction method generally has positive effects. “Hands-on science… is an educational experience that actively involves people in manipulating objects to gain knowledge or understanding. The key word is ‘actively’. Learning is not a passive activity. When students are involved in hands-on science, they are engaged in their science learning, and thus promoting scientific literacy” (Bigler, 2011). This makes a good point: having actively engaged students is one of the key components of effective teaching.

Hands-on instruction is great, but is only viable for specific classes such as the sciences. For other courses, such as Discrete Mathematics and Psychology, one has to be exceptionally creative to design a hands-on lesson. In this case, there are other ways to improve teaching technique. Just to name a few: “Begin each class presentation with something easy, something all students can grasp or accomplish.”, “Informal notes set a better atmosphere than does a script; they also allow for student interaction, interruption, and dialog.”, “Be available regularly and consistently to your students. Keep your office or lab hours!” (Kelly, 1973).

An experiment was conducted that used the method of instruction as the independent variable. In the CMI group, immediate proctor feedback on quizzes was provided to students. In the CMI Lecture group, the weeks alternated between immediate feedback one week and a lecture discussion the next. In the Independent Study group, students worked at their own pace on assigned texts. In the Contact Control group, lecture discussion as well as small group discussions and lectures were used. In the Delayed Contact group, students took the pretest and posttest before studying course material and texts; after their studying, they wrote unit summaries.

screen-shot-2016-10-18-at-2-52-08-pm

Figure 1: Results of five different teaching methods on students (Moore, 1976).

Figure 1 shows that the CMI group, which received immediate feedback, performed the best and had the greatest gains, while the Delayed Contact group performed the worst. The CMI Lecture group, with feedback every other week, performed second best, and the Independent Study group had the same gain as the Contact Control group. The trend is that the groups receiving more immediate feedback had better posttest scores and greater gains than the rest.

References

Bell, B., & Federman, J. (2013). E-Learning in Postsecondary Education. The Future of Children,23(1), 165-185.

Bigler, A. M., & Hanegan, N. L. (2011, June). Student Content Knowledge Increases After Participation in a Hands-on Biotechnology Intervention. Journal of Science Education and Technology, 20(3), 246-257.

Gibson, W. L., Jr. (1954, December). Improved Teaching Techniques. Journal of Farm Economics, 36(5), 877-882.

Kelly, S. P. (1973, Summer). Effective College Teaching Techniques. Improving College and University Teaching, 21(3), 229-232.

Moore, R. S. (1976, Autumn). Effect of Differential Teaching Techniques on Achievement, Attitude, and Teaching Skills. Journal of Research in Music Education, 24(3), 129-141.

Sentiment Analysis of Online Reviews

Posted on Oct 18, 2016 in Writing Assignment 3

It is a given that computers aren’t naturally smart; they aren’t able to understand the intricacies of human languages. However, we can use machine learning to allow computers to extract data from our language. Sentiment analysis, as the name suggests, allows us to teach computers how to analyze text and return a value that determines whether a given text is written in a positive, negative, or neutral manner. Using sentiment analysis, we are able to use computers for opinion mining. Opinion mining and sentiment analysis become more feasible as social media websites accumulate more data that can be used. Therefore, we will discuss sentiment analysis of reviews found on social media. This has significance not only for research, given the nature of machine learning, but also economically, because it allows business leaders to make more informed decisions using more reliable data.

The source of our text will come from social media websites such as Yelp, because they allow users to crowdsource influence. For example, if you’re looking to purchase something or eat at a specific restaurant, you are able to read the thoughts of past customers rather than asking your friends, who have a lower chance of having purchased the item or visited the place (Liu). More than 70% of readers of reviews say that they’re largely influenced by the reviews of their purchases (Pang et al., 2). While reviews may be impactful on readers, it’s important to acknowledge the existence of “fake” reviews that are fabricated for the benefit of the business, as well as spam reviews that are written to advertise another business.

When mining data from social media websites such as Yelp, we can parse the text in different ways for sentiment analysis. By sorting words into categories, we can use those categories when assigning a sentiment rating (Pak et al.). For example, if a text contains many words from the positive set of words, we will know that it is written in a positive sense. While this may seem simple, it is complicated by the fact that human language can use seemingly positive words in a negative sense and vice versa (Vinodhini et al.). For example, the phrase “not bad” is written with positive intent. This is why we have to take other cases of sentiment into consideration when we’re opinion mining texts online.
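A minimal sketch of this word-category idea is shown below. The positive and negative word sets are illustrative assumptions, standing in for the much larger lexicons a real system would use:

```python
# Toy lexicons; real systems use far larger, domain-tuned word lists.
POSITIVE = {"great", "delicious", "friendly", "fast", "clean"}
NEGATIVE = {"bad", "slow", "rude", "dirty", "expensive"}

def lexicon_sentiment(review):
    """Label a review by counting words from the positive and negative sets."""
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(lexicon_sentiment("The food was delicious but the service was slow"))  # neutral
```

The example sentence contains one positive and one negative word, so the raw count calls it neutral; phrases like “not bad” confuse this counting approach even further, which is why the contextual handling discussed below is needed.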

Figure 1: Sentiment Analysis using text from a review (Gundy)

There are different types of opinions: regular versus comparative, and explicit versus implicit (Liu). When we parse the texts in our data, it is wise to determine what types of opinions are in the data set. Regular opinions consist of direct and indirect opinions, which either state a sentiment about something or state a sentiment as a result of something. Comparative opinions simply compare one thing with another. Explicit and implicit opinions deal with subjective and objective statements, respectively, that yield an opinion. Given these different types, sentiment analysis becomes more difficult and more prone to error if they aren’t accounted for. There are undoubtedly going to be problems with sentiment analysis; as previously mentioned, computers are only as intelligent as we make them to be.

As the texts get longer and more linguistically complex, we will have to evaluate the expressions separately. For example, if a given text has more than one expression, it is difficult to determine the sentiment if the expressions have contradicting sentiments. Therefore, it is better to segment the text into different expressions so that we can determine the polarity of each expression to retrieve the contextual polarity (Wilson et al.). By using the polarity of certain expressions to modify the polarity of others, we will be able to achieve a more precise result that fits the context of the expression itself.
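The sketch below illustrates that segmentation idea in a very reduced form: the text is split into expressions, each expression gets its own lexicon score, and a simple negation rule flips the polarity of the word that follows. The word lists, the splitting pattern, and the negation rule are illustrative assumptions, far simpler than the phrase-level features used by Wilson et al.:

```python
import re

POSITIVE = {"good", "great", "delicious"}
NEGATIVE = {"bad", "slow", "awful"}
NEGATORS = {"not", "never", "no"}

def clause_polarity(clause):
    """Score one expression, flipping polarity right after a negator word."""
    score, negate = 0, False
    for w in clause.split():
        if w in NEGATORS:
            negate = True
            continue
        if w in POSITIVE:
            score += -1 if negate else 1
        elif w in NEGATIVE:
            score += 1 if negate else -1
        negate = False                      # negation only affects the next word
    return score

def contextual_sentiment(text):
    """Split into expressions, score each, and sum the clause polarities."""
    clauses = re.split(r"[,.;]|\bbut\b", text.lower())
    return sum(clause_polarity(c) for c in clauses)

print(contextual_sentiment("The pizza was not bad, but the wait was awful"))  # prints 0
```

Here “not bad” correctly contributes a positive score within its own expression, something a document-wide word count would have missed.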

Machine learning, sentiment analysis, and opinion mining will allow us to learn more about human language on a larger scale, thanks to the processing power of computers. There are many computer science and natural language processing problems that need to be solved in order to achieve successful results. Nevertheless, it is interesting what the results tell us about our own language. By applying these concepts, we are able to break down our language and evaluate its subjective content by objective means.

Works Cited

1. Liu, Bing. “Sentiment analysis and opinion mining.” Synthesis lectures on human language technologies 5.1 (2012): 1-167.

2. Pak, Alexander, and Patrick Paroubek. “Twitter as a Corpus for Sentiment Analysis and Opinion Mining.” LREc. Vol. 10. 2010.

3. Pang, Bo, and Lillian Lee. “Opinion mining and sentiment analysis.” Foundations and trends in information retrieval 2.1-2 (2008): 1-135.

4. Vinodhini, G., and R. M. Chandrasekaran. “Sentiment analysis and opinion mining: a survey.” International Journal 2.6 (2012).

5. Wilson, Theresa, Janyce Wiebe, and Paul Hoffmann. “Recognizing contextual polarity in phrase-level sentiment analysis.” Proceedings of the conference on human language technology and empirical methods in natural language processing. Association for Computational Linguistics, 2005.

The Definition and Management Risks of Medical Waste

Posted on Oct 18, 2016 in Writing Assignment 3

Medical waste is defined as any solid waste generated by health care facilities during the diagnosis and treatment of humans and animals (Diaz and Savage, 2003). Medical waste can be classified into six categories: sharps, laboratory, animals, pathological radioactive, chemical, and residual after incineration or microwave treatment. Medical waste contains a high amount of plastic and is hazardous and infectious. Due to the potential risks of its management process, the cost of transporting medical waste is much higher than that of regular solid waste (Lee et al., 2003). Hepatitis B virus, hepatitis C virus, and human immunodeficiency virus are the three diseases most commonly transmitted to health care workers. Approximately 6 million out of the 35 million health care workers were infected by one of these three diseases, and more than 90% of the cases took place in developing countries (WHO, 2002). Some of the infectious wastes include blood-soaked bandages, discarded surgical gloves and instruments, glassware, removed body parts and organs, and needles that were used to give injections and draw blood. Many of these incidents were caused by uncontained needles in collected wastes (Diaz and Savage, 2003).

Over the past decades, medical waste has become a less significant problem due to ongoing research on ways to minimize the negative impacts it can have on society. Separation techniques have been developed for this purpose: to correctly identify medical waste from health care facilities and separate the infectious materials that require expensive special care from regular solid wastes (Lee et al., 2003). For instance, waste generated by a hospital cafeteria is not medical waste because it is not hazardous and can be disposed of through recycling, landfilling, or composting, and its cost is therefore low. A study published in 2003 reveals that the cafeteria of hospital B in Massachusetts produced more waste than the operating room and emergency room (Table 1) (Lee et al., 2003). After separation techniques were imposed, there was a steady decrease in the amount of identified medical waste (Garcia, 1999). Separation techniques successfully reduce the amount of material that needs to be treated by incineration, a process that uses oxygen and high temperature to convert combustible components into water vapor and carbon dioxide. Incineration emits toxic substances that are harmful to the human body, such as acid gases, dioxins, furans, and heavy metals, and the high plastic content of medical waste creates an even greater potential for acid gas and dioxin emissions (Lauber, 1987).

Table 1. Treatment and disposal characteristics of medical waste produced from different waste generation department in hospital B. Table taken from Lee, 2004.

Another way medical waste is spilled is the accidental breakage of medical equipment. Mercury is a toxic pollutant that can severely harm the central nervous system of fish and cause symptoms such as paralysis, insomnia, and even death. Thermometers, sphygmomanometers, gastroenterology instruments, and many other pieces of nonclinical equipment contain mercury. There was an average of 18 mercury spills every year at UCLA. To prevent future accidents, many facilities are encouraged to use more expensive alternatives that avoid toxic pollutants. Although this initiative could be expensive, the hazardous materials unit can eventually save time and money by preventing similar incidents (Environmental Best Practices for Health Care Facilities, 2002).

 

Figure 1. The percentage of different medical equipment that contain mercury from seven Northern Carolina hospitals. Figure taken from Environmental Best Practices for Health Care Facilities, 2002.

 


Figure 2. UCLA mercury spill frequency form 1997 to 1999. Figure taken from Environmental Best Practices for Health Care Facilities, 2002.

New methods are constantly being developed to reduce medical waste and pollution. Incineration was used to eradicate potential infection, but it emits toxic substances that are harmful to life. Separation techniques were introduced as a way to reduce the use of incineration, but there are still potential risks in the management process. The risks of medical waste cannot be completely eliminated; however, more research can still be done on this topic to improve the situation.

 

References

  1. Environmental Best Practices for Health Care Facilities. (Nov 2002). Eliminating Mercury in Hospitals. Environmental Protection Agency. USA.
  2. Diaz, L.F., Savage, G.M. (Dec 2003). Risks and Costs Associated with The Management of Infectious Wastes. WHO/WPRO. Philippines.
  3. Garcia, R. (1999). Effective Cost-Reduction Strategies in the Management of Regulated Medical Waste. Association for Professionals in Infection Control and Epidemiology, Inc. New York.
  4. Lauber, J.D. (Feb 1987). New Perspectives on Toxic Emissions from Hospital Incinerators. The New York State Legislative Commission on Solid Waste Management Conference on Solid Waste Management & Materials Policy. NY.
  5. Lee, B.K., Ellenbecker, M.J., Moure-Ersaso, R. (Oct 2003). Alternatives for treatment and disposal cost reduction of regulated medical wastes. Elsevier. University of Ulsan, South Korea, and University of Massachusetts, USA.
  6. WHO. (Jan 2002). The World Health Report 2002: Reducing Risks, Promoting Healthy Life. World Health Organization. Geneva, Switzerland.

The Vast Stars In The Universe and Some of Their Unique Properties

Posted on Oct 18, 2016 in Writing Assignment 3

With the vast cosmos ever present in the view of the most powerful telescopes available on Earth, we will always see an abundance of stars. There will always be vast differences among these stars, since each star is different to begin with. They differ in properties such as size, shape (when observed very closely), mass, color, temperature, and magnetic behavior. Consequently, stars are assigned different names and types depending on their key features, primarily temperature, size, and color. For example, “a star’s color depends on its surface temperature,” and these colors range across white, yellow, orange, red, and blue, with some slight variations, along with differing magnitudes (Star Dome).

Another property of stars, involving their shape, is their “smoothness”: stars “are not smooth” but are “covered by granulation pattern(s) associated with the heat transport by convection,” and these “convection related surface structures have different size, depth, and temporal variations” (Chiavassa & Bigot, 2015). Evolved stars such as red supergiants (RSGs) display this underlying granulation pattern especially clearly, given their “large diameter, proximity, and high infrared luminosity” along with “effective temperatures lower than ~4000 K” (Chiavassa & Bigot, 2015).

Another distinguishing trait of a star is its mass and the properties that arise from it. Neutron stars and other superdense objects “often spin fast and sweep a pulse of radio emission across us with each turn”; these become known as pulsars, as “they draw their radio energy from a magnetic braking mechanism that is gradually slowing down their spin” (A Magnetar In Sheep’s Clothing, 2008). Pulsars are unique in the sense that they do “not undergo any significant field decay during their lifetimes” (Konar & Bhattacharya, 1999). A different kind of neutron star, which spins more slowly and draws its emitted energy from a different source, its intense magnetic field, became known as a magnetar; magnetars are known to be the most magnetic objects in the universe (A Magnetar In Sheep’s Clothing, 2008).

Neutron stars showcase an evolutionary state beyond isolated radio pulsars, and this evolutionary link can be seen as a unified picture of the evolution of their spin and their magnetic field (Konar & Bhattacharya, 1999). Another variation is the high-amplitude delta Scuti star, which pulsates at a different rate, as certain “effective temperatures” are needed for “diffusion and other processes” to cause segregation of chemical elements, thus modifying the “excitation of the pulsations” (Handler, 2009). Overall, stars show these unique properties, and some are assigned defining names as a result. These properties have not been fully explored and are still questioned to this day as we try to learn more about them.

Figure 1: Evolution of the surface inhomogeneities during the stellar evolution (Chiavassa & Bigot, 2015).

Works Cited:

“A Magnetar In Sheep’s Clothing.” Sky & Telescope 115.6 (2008): 14.

Chiavassa, A., and L. Bigot. “Stellar Granulation And Interferometry.” EAS Publications Series 69/70.(2015): 151-175.

Handler, Gerald. “Delta Scuti Variables.” AIP Conference Proceedings 1170.1 (2009): 403-409.

Konar, Sushan, and Dipankar Bhattacharya. “Magnetic Field Evolution Of Accreting Neutron Stars — II.” Monthly Notices Of The Royal Astronomical Society 303.3 (1999): 588.

“Star Dome.” Astronomy 44.6 (2016): 38-41.

The Economic Benefits of Sustainable Building

Posted on Oct 18, 2016 in Writing Assignment 3

Green building offers more than environmental benefits to society. Various studies have shown that green building has significant economic benefits as well; Figure 1 categorizes some of them. Green building lowers the costs of energy, waste disposal, water, operations, and maintenance, and it saves money by increasing productivity and improving the indoor environmental quality of the building (Spivey, 2004). Since green rating organizations have only been around for the past two decades, there are no statistics that demonstrate the actual economic benefits obtained throughout a sustainable building’s lifecycle. Instead, expenses and savings are estimated based on sets of data collected in the past.


Figure 1: Advantages of Green Building, Source: Dailey, J. (2013, May 07). An Introduction to the Cost Benefits of Green Buildings. Retrieved from http://ny.curbed.com/2013/5/7/10246368/an-introduction-to-the-cost-benefits-of-green-buildings

 

Several studies have been conducted to show that green building is economically beneficial in the long run despite being initially more expensive than conventional building. The Sustainable Building Task Force, a former green building organization, found that a building with green features costs on average 2% more to build than a conventional one, but pays back the investment ten times over a period of 20 years (Spivey, 2004). Green building cuts material, design, water, and energy costs due to its sustainable characteristics. For example, sustainable buildings include natural systems that help reduce the building’s energy and water usage, and therefore its expenses. By conserving water, native landscaping cuts costs for water and maintenance; likewise, natural pollution prevention systems cut costs for waste disposal. In addition, the design of a sustainable structure eliminates expenses in several stages of the construction process: green building requires efficiency in infrastructure, and therefore savings come from the minimized use of sewer lines, utility lines, and electrical equipment (Nalewaik & Venters, 2009). By incorporating water- and energy-saving technologies, green building reduces future water and energy costs. Green roofs, also known as vegetated roofs, are natural technologies that help decrease the energy used in a building by reducing the need for heating and air conditioning. The Energy Information Administration states that a green roof reduces annual household energy consumption by 1% (Blackhurst, Hendrickson, and Matthews, 2010).

Furthermore, studies demonstrate that economic benefits arise from improved indoor environmental quality and increased productivity in office buildings. Buildings with low indoor environmental quality can trigger allergies, sneezing, and drowsiness, and productivity among employees decreases as a result. Studies have shown that green features increase occupant health and productivity by 1-7%. It is estimated that a 1% increase in productivity is worth $600-700 per employee (Spivey, 2004). A study conducted in 1998 reported eight cases showing up to 16% improvement in productivity as a consequence of relocating employees to new facilities designed according to green building codes. It was also calculated that a 1% increase in employee productivity would equal a 15% decrease in property costs, since employee cost is almost 15 times larger than the share of property cost (Ries, Bilec, Gokhan, and Needy, 2006).
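A back-of-the-envelope sketch of that last ratio is given below; the annual per-employee cost is an illustrative assumption, since the sources above only supply the roughly 15:1 ratio of employee cost to property cost and the $600-700 value of a 1% productivity gain:

```python
# Rough sketch of the productivity arithmetic; the 60,000 figure is an
# illustrative assumption, not a number taken from the cited studies.
employee_cost = 60_000                    # assumed annual cost per employee
property_cost = employee_cost / 15        # property share is ~1/15 of employee cost

productivity_gain = 0.01                  # a 1% productivity improvement
value_of_gain = productivity_gain * employee_cost     # ~$600 per employee
share_of_property = value_of_gain / property_cost     # ~0.15, i.e. ~15% of property cost

print(round(value_of_gain), f"{share_of_property:.0%}")   # 600 15%
```

In other words, because the payroll dwarfs the property bill, even a small productivity gain is worth a large slice of what the building itself costs to occupy.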

In addition to the economic benefits green building offers the owner and tenant, it also proves profitable to the developer of the building. The free market values sustainable structures. A study conducted in 2009 shows that tenants prefer green buildings over conventional buildings since LEED buildings had an 8% higher occupancy rate. Since demand for green buildings is high, developers are able to sell them at a higher price (Tolan, 2012).

Despite being costlier than conventional building during the construction phase, green building has been estimated in numerous studies to be economically beneficial in the long run. Green building has the ability to reduce costs and create savings for the tenant, owner, and developer. Its evident advantages have contributed to its increasing popularity in the construction world.

 

References 

Blackhurst, M., Hendrickson, C., & Matthews, H. S. (2010). Cost-Effectiveness of Green Roofs. Journal Of Architectural Engineering, 16(4), 136-143.

Nalewaik, A., & Venters, V. (2009). Cost benefits of building green. Cost Engineering, 51(2), 28.

Ries, R., Bilec, M. M., Gokhan, N. M., & Needy, K. L. (2006). The economic benefits of green buildings: a comprehensive case study. Engineering Economist, 51(3), 259+. Retrieved from http://go.galegroup.com/

Spivey, A. (2004). Going green saves over time. Environmental Health Perspectives, 112(5), A276. Retrieved from http://go.galegroup.com/

Tolan, P. (2012). Going-Going-Green: Strategies for Fostering Sustainable New Federal Buildings. Public Contract Law Journal, 41(2), 233-295. Retrieved from http://www.jstor.org/stable/41635335