Gap Junctions: The complex bridge of the body
Gap junctions are a crucial component of cell-to-cell communication: they connect cells and can transport a multitude of products. They are found in many different types of body cells and assist with a variety of functions.
Gap junctions have been found in a variety of bone cells (Doty, 1981). Acting as intercellular bridges, gap junctions connect adjacent bone cells, which suggests that they help control and coordinate bone cell activity. In most cells, gap junctions connect the cytoplasm of two cells and can transfer hydrophilic molecules. However, when an immune response surfaces, rat cells can shut off these gap junctions in order to minimize the spread of a foreign substance (Fraser, 1987).
Gap junctions are even used in cells related to hormone distribution. Thyroid cells were found to reconstruct gap junctions in response to the hormone TSH; however, when protein kinase C was activated, the functional activity of the gap junctions decreased (Munari-Silem, 1991). This shows how different aspects of cell communication can intertwine: gap junctions can be affected by hormones. In fact, gap junctions can have different selectivity because the connexin subunits that form them can be mixed and matched, leading to a whole realm of complexity in what passes through the junctions (Kumar, 1996).
Sources
Doty, Stephen B. “Morphological Evidence of Gap Junctions between Bone Cells.” Calcified Tissue International 33.1 (1981): 509-12.
Fraser, S., C. Green, H. Bode, and N. Gilula. “Selective Disruption of Gap Junctional Communication Interferes with a Patterning Process in Hydra.” Science 237.4810 (1987): 49-55.
Munari-Silem, Yvonne, Christine Audebet, and Bernard Rousset. “Hormonal Control of Cell to Cell Communication: Regulation by Thyrotropin of the Gap Junction-Mediated Dye Transfer between Thyroid Cells.” Endocrinology 128.6 (1991): 3299-309.
Kumar, Nalin M., and Norton B. Gilula. “The Gap Junction Communication Channel.” Cell 84.3 (1996): 381-88.
Analyzing and Interpreting Genome Sequences
Finding mutations for genetic disorders involves not only the study of genes and their modes of inheritance but also the cooperation of the scientific community and the way genomic data is collected and sequenced. Communication among scientific communities is crucial to advancing the understanding of genetic disorders. Because the genome is universal, gene expression and mutations in a few individuals can give information about genes that are found in all humans.
[Figure 1. Source: Lander ES, et al. Initial sequencing and analysis of the human genome. Nature, 2001.]
Genomic sequencing of vertebrates such as humans and mice, as shown in Figure 1, has been paramount in learning more about genes. Since the sequencing of the human genome was completed, researchers have worked to sequence exons, the DNA sequences that code for proteins (Lyon 2012). This is called exome sequencing, the exome being “a set of exons in a genome” (Ng 2008). Sequencing genomes was initially very expensive, and work in the field was held back by costs. As technology improved, a way to sequence genomes cheaply was found in 2007, making exome sequencing easier to research (Albert 2007). Soon after, the causes of disorders such as Bartter syndrome were found through exome sequencing (Choi 2009). Genome and exome sequencing became more available through companies founded with the goal of sequencing them at low cost. Genome or exome sequencing is used to find genetic causes of disorders such as diabetes and autism, to diagnose patients, and to study pedigrees (Bonnefond 2010, Hedges 2009).
Unfortunately, exome sequencing cannot be used to learn about all genetic disorders. Some disorders are caused by mutations in the non-coding regions of DNA (Cartault 2012). It is estimated that exome sequencing finds a genetic cause in only 10% to 50% of cases, but this is difficult to verify because most researchers who fail to find a genetic cause through exome sequencing do not publish that failure (Lyon 2012).
Genome sequencing is more readily available, but it produces a tremendous amount of data that needs to be analyzed and interpreted. The field of bioinformatics deals with analyzing and interpreting genomic data. Current software tools should be improved to aid in analyzing these data; they have limited ability because each can typically analyze only one type of data from one type of sequencing experiment (Lyon 2012). However, there is not enough support to improve these tools. As a result, a genome sequence that costs $1,000 is in reality much more expensive (Lyon 2012): analyzing the sequence can cost $20,000 to $100,000 (McPherson 2009). These astronomical costs pushed those working in genomics to seek better technology, and newer software programs were released (Lyon 2012). The problem remains, however, that ever larger amounts of data need to be analyzed more quickly and accurately at practical cost.
There is a more practical way of analyzing and interpreting genomic data. Instead of each individual or team analyzing an entire genome, several groups can analyze one genome and share data with each other. In addition, genomic data and its analyses can be made available to the entire scientific community to spread knowledge. Collaboration in the scientific community produced amazing results in the past when the human genome was sequenced for the first time; it can work again for analyzing genomes. There may be privacy concerns, because there are several ways of thinking about who “owns” an individual’s genomic sequence (Lyon 2012). The companies that store, analyze, and interpret the sequence may own it, or the individual the sequence came from may own it; currently, the consensus is that the individual owns the sequence (Lyon 2012). There may also be other concerns about data sharing among scientists. Still, collaborating to tackle the problem of analyzing and interpreting genomic sequences to find the genetic basis of genetic disorders is a smart idea.
Works Cited
Albert TJ, Molla MN, Muzny DM, Nazareth L, Wheeler D, Song X, Richmond TA, Middle CM, Rodesch MJ, Packard CJ, Weinstock GM, Gibbs RA: Direct selection of human genomic loci by microarray hybridization. Nat Methods. 2007, 4: 903-905. 10.1038/nmeth1111.
Bonnefond A, Durand E, Sand O, De Graeve F, Gallina S, Busiah K, Lobbens S, Simon A, Bellanné-Chantelot C, Létourneau L, Scharfmann R, Delplanque J, Sladek R, Polak M, Vaxillaire M, Froguel P: Molecular diagnosis of neonatal diabetes mellitus using next-generation sequencing of the whole exome. PLoS One. 2010, 5: e13630-10.1371/journal.pone.0013630.
Cartault F, Munier P, Benko E, Desguerre I, Hanein S, Boddaert N, Bandiera S, Vellayoudom J, Krejbich-Trotot P, Bintner M, Hoarau JJ, Girard M, Génin E, de Lonlay P, Fourmaintraux A, Naville M, Rodriguez D, Feingold J, Renouil M, Munnich A, Westhof E, Fähling M, Lyonnet S, Henrion-Caude A: Mutation in a primate-conserved retrotransposon reveals a noncoding RNA as a mediator of infantile encephalopathy. Proc Natl Acad Sci USA. 2012, 109: 4980-4985. 10.1073/pnas.1111596109.
Choi M, Scholl UI, Ji W, Liu T, Tikhonova IR, Zumbo P, Nayir A, Bakkaloglu A, Ozen S, Sanjad S: Genetic diagnosis by whole exome capture and massively parallel DNA sequencing. Proc Natl Acad Sci USA. 2009, 106: 19096-19101. 10.1073/pnas.0910672106.
Hedges DJ, Burges D, Powell E, Almonte C, Huang J, Young S, Boese B, Schmidt M, Pericak-Vance MA, Martin E, Zhang X, Harkins TT, Züchner S: Exome sequencing of a multigenerational human pedigree. PLoS One. 2009, 4: e8232-10.1371/journal.pone.0008232.
Lander ES, Linton LM, Birren B, Nusbaum C, Zody MC, Baldwin J, Devon K, Dewar K, Doyle M, FitzHugh W, Funke R, Gage D, Harris K, Heaford A, Howland J, Kann L, Lehoczky J, LeVine R, McEwan P, McKernan K, Meldrim J, Mesirov JP, Miranda C, Morris W, Naylor J, Raymond C, Rosetti M, Santos R, Sheridan A, Sougnez C, et al: Initial sequencing and analysis of the human genome. Nature. 2001, 409: 860-921. 10.1038/35057062.
Lyon GJ, Wang K: Identifying disease mutations in genomic medicine settings: current challenges and how to accelerate progress. Genome Medicine. 2012, 4: 58. 10.1186/gm359.
McPherson JD: Next-generation gap. Nat Methods. 2009, 6: S2-5. 10.1038/nmeth.f.268.
Ng PC, Levy S, Huang J, Stockwell TB, Walenz BP, Li K, Axelrod N, Busam DA, Strausberg RL, Venter JC: Genetic variation in an individual human exome. PLoS Genet. 2008, 4: e1000160-10.1371/journal.pgen.1000160.
Different Theories That Attempt To Describe and Explain The Universe
Many scientists have attempted to explain the universe we reside in through many different theories. None of them is absolute, of course, but some tend to be more believable than others. While there is no definitive evidence that fully supports any single theory, we can only judge, based on what we see so far, how well a theory’s intricacies accord with the mathematical equations that describe the governing laws of physics. Many theories and models can be explored, the prevalent ones being the multiverse theory, quantum field theory, the anisotropic model, and the currently accepted Big Bang theory.
One of these theories consists of an anisotropic model of the universe. This model plays a role “in the study of cosmic highly excited strings in the early universe” (Sepehri et al., 2015). These strings became an important part of the theory because they were supposedly created during the phase transition after the Big Bang explosion as the temperature lowered, “then decay[ing] to standard model particles at the Hagedorn temperature” (Sepehri et al., 2015). Essentially, the theory shows that vector string tachyons control the expansion of the anisotropic universe, which shifts from the non-phantom phase to the phantom phase, with the phantom-dominated era of the universe accelerating and ending up in a big rip singularity (Sepehri et al., 2015).
Another theory, quantum field theory in curved spacetime, asserts that quantum fields propagate on a classical background, describing quantum phenomena “in a regime where the quantum effects of gravity do not play a dominant role, but the effects of curved spacetime may be significant” (Tavakoli and Fabris, 2015). This description becomes invalid in regimes arbitrarily close to the classical singularities, where spacetime curvatures reach Planckian scales and the quantum effects of gravity are no longer negligible (Tavakoli and Fabris, 2015). The theory is expressed through many complex mathematical equations and finds some foothold as a plausible explanation of the creation of particles in a cyclic universe.
A microgravity environment for the central nervous system allows us to explore the beginnings of mankind in a purely theoretical sense. While this article does not primarily discuss the origins of the universe, it relates to the beginnings of mankind and draws a connection to the universe. The connection is made through the limbic system, linking brainwaves, oscillations, and our soul, with the soul being our origin and the greater limbic system being the seat of the soul. The article asserts that everything moves in a wave-like pattern, where everything is oscillating, and relates this idea to parts of the human body that create wave-like oscillations, such as “brain waves, heart rate, blood pulsation, and pressure, respiration, peristalsis for most living creatures and oscillations or waves for the whole of the universes contents” (Idris, 2014). These relations highlight the basis of this theory, which rests more on similarities than on mathematical logic and proofs.
One of the better-known proposals is the theory of multiple universes, the multiverse theory, in which an infinite number of universes exist that accommodate all possible scenarios of events. The theory contrasts a “many-worlds view, in which all possible outcomes of a quantum measurement are always actualized, in the different parallel worlds, and a one-world view, in which a quantum measurement can only give rise to a single outcome” (Aerts & Bianchi, 2015). This is made possible by many quantum measurements happening frequently, thus allowing for multiple pictures. The theory draws some basis from the equations of quantum theory that describe waves; however, the multiverse theory holds that the appearance of a single outcome is an illusion (Vaidman, 2015).
Currently there are many theories and attempts being made to describe the universe, but they are immensely difficult to explain and involve many intricacies. Even with all the specificities of each theory, most fall short in some aspect, and due to our lack of complete knowledge, we cannot fully accept any one of them. The Big Bang theory explains many of the phenomena we have come to know and understand, and explains them well according to our knowledge thus far, but we cannot fully accept it yet. For the time being, however, it is the accepted theory.
Works Cited
Aerts, Diederik, and Massimiliano Sassoli de Bianchi. “Many-Measurements Or Many-Worlds? A Dialogue.” Foundations Of Science 20.4 (2015): 399-427.
Idris, Zamzuri. “Searching For The Origin Through Central Nervous System: A Review And Thought Which Related To Microgravity, Evolution, Big Bang Theory And Universes, Soul And Brainwaves, Greater Limbic System And Seat Of The Soul.” Malaysian Journal Of Medical Sciences 21.4 (2014): 4-11.
Sepehri, Alireza, Anirudh Pradhan, and Hassan Amirhashchi. “Removing The Big Rip Singularity From Anisotropic Universe In Super String Theory.” Canadian Journal Of Physics 93.11 (2015): 1324-1329.
Tavakoli, Yaser, and Júlio C. Fabris. “Creation Of Particles In A Cyclic Universe Driven By Loop Quantum Cosmology.” International Journal Of Modern Physics D: Gravitation, Astrophysics & Cosmology 24.8 (2015): -1.
Vaidman, Lev. “The Emergent Multiverse: Quantum Theory According To The Everett Interpretation.” British Journal For The Philosophy Of Science 66.2 (2015): 465-468.
Finding their way home: Different ways that cell messengers reach their destination
Cell communication encompasses various methods by which messengers reach their target cells, in both eukaryotic and prokaryotic organisms.
Bacterial cells use quorum sensing (QS) as a form of cell communication. This method of communication mirrors how hormones are used in eukaryotic cells. A strain of E. coli that uses QS was actually found to be able to communicate with eukaryotic cells (Sperandio, 2003). The fact that it can replicate the shapes of our hormones in order to infiltrate our cells shows how far bacteria can adapt in order to survive.
However, hormones in some species have been found to work around the specificity for a certain cell type. In certain rat ovarian granulosa cells and mouse myocardial cells, it was discovered that hormones targeting a certain cell were able to reach their target through an unrelated cell via intercellular communication (Lawrence, 1978). The non-target cell and target cell communicated through a mediator that brought the hormonal stimulus to the target cell.
In long-distance cell communication, exosomes were found to be mediators that assist different messengers (Bang, 2012). Exosomes are vesicles that can carry a variety of cargo, including proteins, messenger RNAs, and microRNAs. And since exosomes are secreted by a variety of cell types, they can mediate all different kinds of pathways and communications between the many cells of the body.
Plants also have their own way of long-distance communication. Similar to the human body and its circulatory system, plants have phloem transport tubes that connect the most distant organs of the plant (Kehr, 2007). The messengers that plants use include RNAs that correspond to physiological processes crucial to the plant; these RNAs can be translated into important proteins that help the plant function and protect itself.
References
Sperandio, V., A. G. Torres, B. Jarvis, J. P. Nataro, and J. B. Kaper. “Bacteria-host Communication: The Language of Hormones.” Proceedings of the National Academy of Sciences 100.15 (2003): 8951-956.
Lawrence, Theodore S., William H. Beers, and Norton B. Gilula. “Transmission of Hormonal Stimulation by Cell-to-cell Communication.” Nature 272.5653 (1978): 501-06.
Bang, Claudia, and Thomas Thum. “Exosomes: New Players in Cell-cell Communication.” The International Journal of Biochemistry & Cell Biology 44.11 (2012): 2060-064.
Kehr, J., and A. Buhtz. “Long Distance Transport and Movement of RNA through the Phloem.” Journal of Experimental Botany 59.1 (2007): 85-92.
Trusted Traveler Programs
Current aviation security procedures screen all passengers uniformly. Changing the amount of screening some individuals receive has the potential to relieve the burden on frequent travelers while making the screening process more efficient. Trusted traveler programs exist so that travelers pre-identified as “low risk” undergo expedited screening (Caulkins). This allows security resources to be shifted from low-risk passengers to the unknown-risk population. However, fears arise that terrorists may exploit these programs to harm the public.
Trusted traveler programs are one of many efforts by United States Customs and Border Protection to make the international arrivals process faster and more convenient for travelers (Chow, Dreyer). They simplify traveling by eliminating paper forms, expanding the use of Automated Passport Control kiosks, and incorporating mobile apps for travelers. Mobile Passport Control allows travelers to fill out customs declaration forms and biographic information before the passenger even lands (Drury, Ghylin).
NEXUS is a program offered by both the United States and Canadian border protection agencies that allows registered users accelerated clearance when entering the US or Canada. The SENTRI program is similar to NEXUS except that it offers expedited clearance at the southern US land ports of entry with Mexico. FAST is another program that caters to low-risk truck shipments between the US and both Canada and Mexico (Persico, Todd).
All of these programs require applicants to undergo an intensive background check against government databases and intelligence, as well as an in-person interview with a customs officer.
These new programs are growing in popularity: over 350,000 people now belong to NEXUS, and the program receives anywhere from 10,000 to 12,000 applications a month. The SENTRI program is also very successful, with over 200,000 people enrolled. By pre-screening travelers, these programs are able to reduce wait times by anywhere from 10 minutes to 2 hours (Richardson, Cave).
References:
Caulkins JP (2004) CAPPS II: A risky choice concerning an untested risk detection technology. Risk Anal 24(4):921–924
Chow J, Chiesa J, Dreyer P, Eisman M, Karasik TW, Kvitky J, Lingel S, Ochmanek D, Shirley C (2005) Protecting commercial aviation against the shoulder-fired missile threat. RAND Corporation, Santa Monica
Drury CG, Ghylin KM, Holness K (2006) Error analysis and threat magnitude for carry-on bag inspection. Proceedings of the Human Factors and Ergonomics Society 56th Annual Meeting. 1189–1193
Persico N, Todd PE (2005) Passenger profiling, imperfect screening, and airport security. Am Econ Rev 95(2):127–131
Richardson DW, Cave SB, La Grange L (2007) Prediction of police officer performance among New Mexico State Police as assessed by the personality assessment inventory. J Police Crim Psych 22:84–90
The Socio-Political Influence of Television
Television has proven to be one of the most powerful senders of communication, primarily due to its combination of artistic freedom and sphere of influence. It has become a staple of the American home, and with the introduction of streaming platforms and video-on-demand, its omnipresence now means access to any show at any time, all at the click of a button (Lotz, 57). The allure of escapism, of entering a storyline that is removed from yet relevant to reality, is why television has cemented itself in day-to-day activities (Hamilton, 403). In addition, the shows and commercials seen throughout our lifetimes act as catalysts for the trends and movements that occur in everyday society. The issues rooted in a show’s premise, spanning race, religion, class, and gender, can influence and even change the way an audience thinks about a topic. Moreover, the content produced on television has also been dictated by the changing social climate of both national and global societies. One major social movement that has gained traction over the past decade is visibility and human rights for the LGBT community: “Moreover, some digital television tools – morphing, for example – are very well suited for representing continuity, fluidity, and the implosive destruction of ‘classic’ binary oppositions” (Reifova, 1238). Television possesses a cultural sensitivity not found in print media like newspapers and magazines, and it is suited accordingly to meet the constant evolution of a multi-faceted audience.
Television has broken ground on even bigger social and political movements, the result of a more liberal, responsive audience. Feminism is a trademark example of changing viewership and a demand for anti-sexist programming. For instance, in archiving British television, the current content placed under “women’s programming” is no longer in sync with older shows focused on domestic chores such as cooking, fashion, and child care (Moseley, 156). This is because gender stereotypes in the Western world are no longer stagnant; they have achieved a fluidity that can be attributed to a more socially motivated generation of teens and adults. In addressing political issues, television finds itself lost between sensationalist headlines and neutral, factually sound news. Channels like C-SPAN and PBS have limited coverage and deteriorating funding because of (1) a lack of public interest and (2) a very dry perspective on governmental policies. Additionally, since government TV does not generate an exorbitant amount of revenue, there are simply fewer staff and reporters working on this area of media (Gormley Jr., 357). The social effect, however, is a public less educated on issues important to their lives and an industry driven by flashy, ephemeral content.
Works Cited:
- Reifová, Irena. “’It Has Happened Before, It Will Happen Again’: The Third Golden Age of Television Fiction.” Sociologický Časopis / Czech Sociological Review, vol. 44, no. 6, 2008, pp. 1237–1238. www.jstor.org/stable/41132684.
- Lotz, Amanda D. “What Is U.S. Television Now?” The Annals of the American Academy of Political and Social Science, vol. 625, 2009, pp. 49–59. www.jstor.org/stable/40375904.
- Hamilton, Robert V., and Richard H. Lawless. “Television Within the Social Matrix.” The Public Opinion Quarterly, vol. 20, no. 2, 1956, pp. 393–403. www.jstor.org/stable/2746311.
- Moseley, Rachel, and Helen Wheatley. “Is Archiving a Feminist Issue? Historical Research and the Past, Present, and Future of Television Studies.” Cinema Journal, vol. 47, no. 3, 2008, pp. 152–158. www.jstor.org/stable/30136123.
- Gormley, William T. “Television Coverage of State Government.” The Public Opinion Quarterly, vol. 42, no. 3, 1978, pp. 354–359. www.jstor.org/stable/2748298.
Fractal Artwork: Dimension and Complexity as a Guide for Aesthetics
Jackson Pollock is an artist famously known for his technique of paint dripping to create various works of art, using a pouring technique to create designs as opposed to the more Euclidean shapes produced by brush strokes. Due to this method of painting and the results it produces, Pollock’s works can be described as fractal images, and his style as “Fractal Expressionism” [1]. These fractal images are characterized by their fractal dimension, a value that ranges between 1 and 2. A dimension of 1 would have no fractal structure, and values closer to 1 tend to be sparser; on the other hand, a dimension of 2 would be completely filled, and values near 2 are more intricate and complex [2].
The fractal dimension of a piece is determined by the box-counting method. First, a computer-generated mesh consisting of identical squares, or boxes, is placed on a digital image of the painting. Next, the scaling qualities of the fractal pattern are determined by calculating the proportion of filled squares to empty ones. The number of occupied squares, N(L), is a function of the box width L and scales according to the power law N(L) ~ L^(-D), where D is the dimension. Then, a scaling plot is created of -log N(L) against log L. Finally, if the painting has a fractal pattern, this plot will produce a straight line, and the dimension D is the gradient of that line [2].
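The procedure is straightforward to implement. Below is a minimal Python sketch of box counting, assuming the painting has already been digitized and thresholded into a binary (paint / no paint) array; the function name, box sizes, and test image are illustrative rather than taken from the cited analysis.

```python
import numpy as np

def box_counting_dimension(image, box_sizes):
    """Estimate the fractal dimension D of a binary image by box counting.

    image: 2D boolean array, True where the pattern (paint) is present.
    box_sizes: box widths L, in pixels.
    """
    counts = []
    for L in box_sizes:
        # Trim the image so it divides evenly into L x L boxes.
        h, w = (image.shape[0] // L) * L, (image.shape[1] // L) * L
        # Mark each L x L box as occupied if any pixel inside it is filled.
        boxes = image[:h, :w].reshape(h // L, L, w // L, L).any(axis=(1, 3))
        counts.append(boxes.sum())
    # N(L) ~ L^(-D), so -log N(L) = D * log L + const; D is the slope.
    slope, _ = np.polyfit(np.log(box_sizes), -np.log(counts), 1)
    return slope

# Sanity check: a completely filled image should give D close to 2.
print(box_counting_dimension(np.ones((256, 256), dtype=bool), [2, 4, 8, 16, 32]))
```

On a completely filled image the sketch returns a dimension near 2, matching the intuition above that values near 2 correspond to filled, complex patterns.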
Using this method, we can characterize the fractal art of Pollock, or any other fractal images. Analyzing Pollock’s work in this way, mathematicians were able to identify “periods” in his work, in which his artwork would stay near a given value of fractal dimension during a period of a few years [1]. In fact, fractal analysis of Pollock’s work shows that the fractals produced are not simply a consequence of paint being poured, but rather of his own technique, involving deliberate and specific pouring along with body motions. Furthermore, Pollock’s work meets six specific criteria in this analysis. These include having two fractal sets within a piece, due to his techniques, as well as fractal patterns occurring over distinct length scales [1]. The criteria can then be used to authenticate Pollock’s work, since other artists trying to replicate his images will not meet them when analyzed.
Aside from Pollock’s work, fractal dimension can also be used as a means to quantify aesthetics. As stated previously, images range in fractal dimension from 1 to 2; determining which values viewers consider aesthetically pleasing could help ascribe a number to aesthetic works. One study asked participants to state their preferences when shown a number of fractals, each with a different dimension [2]. It showed that participants preferred a fractal dimension of around 1.3 to 1.5, a result consistent across multiple types of fractals, including natural images and parts of Pollock’s works. It is important to note that a control showed the preference did not depend on density but on complexity alone. A different study compared these various types of fractals and found that natural images were most often considered the most beautiful, as well as having the highest fractal dimensions [3]. Finally, one last study asked participants to rate sets of images, each having one of two fractal dimensions and one of two Lyapunov exponents [4]. This exponent quantifies the unpredictability of the process used to generate fractal images; in this case, the range fell between 0.01 and 0.84 bits per iteration, with higher values producing more chaotic images. This study found a mean preference of 1.26 in fractal dimension and 0.37 for the Lyapunov exponent, values reflective of many natural images.
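For readers unfamiliar with Lyapunov exponents, the short Python sketch below estimates one for the logistic map, a standard chaotic process. This is only a toy illustration of the quantity itself; the map, parameters, and function name are my own choices, not the image-generation procedure used in the cited study.

```python
import math

def logistic_lyapunov(r, x0=0.4, transient=1000, iterations=10000):
    """Estimate the Lyapunov exponent (bits per iteration) of the logistic
    map x -> r*x*(1-x) by averaging log2|f'(x)| along an orbit."""
    x = x0
    for _ in range(transient):            # let the orbit settle first
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(iterations):
        total += math.log2(abs(r * (1 - 2 * x)))  # |f'(x)| = |r(1 - 2x)|
        x = r * x * (1 - x)
    return total / iterations

print(logistic_lyapunov(4.0))  # ~1 bit/iteration: strongly chaotic
print(logistic_lyapunov(3.2))  # negative: periodic, predictable
```

A positive exponent means nearby starting points diverge, i.e., the generating process is unpredictable; more positive values correspond to the more chaotic images described above.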
These various studies all attempt to affix a number to what is deemed aesthetic. By isolating a variable such as fractal dimension and running tests to see what is preferred, we can begin to find what constitutes an aesthetic fractal image. However, these studies are still rooted in subjectivity, and they are confined to very specific images that can be analyzed as data. Therefore, quantifying art in this fashion does not define what art truly is. It does, however, help to bridge a gap and allow some form of rating to be applied to a mostly qualitative field.
References:
[1] Taylor, Robert P., et al. “Authenticating Pollock paintings using fractal geometry.” Pattern Recognition Letters 28.6 (2007): 695-702.
[2] Spehar, Branka, et al. “Universal aesthetic of fractals.” Computers & Graphics 27.5 (2003): 813-820.
[3] Forsythe, Alex, et al. “Predicting beauty: fractal dimension and visual complexity in art.” British journal of psychology 102.1 (2011): 49-70.
[4] Aks, Deborah J., and Julien C. Sprott. “Quantifying aesthetic preference for chaotic patterns.” Empirical studies of the arts 14.1 (1996): 1-16.
[5] Jones-Smith, Katherine, and Harsh Mathur. “Fractal Analysis: Revisiting Pollock’s Drip Paintings.” Nature 444.7119 (2006): E9-E10. Academic Search Complete. Web. 30 Nov. 2016.
The Sistine Chapel Restoration Controversy and Its Implications in the Field of Art Conservation
Ethical Controversies
Art conservation is a vital field in the preservation of our cultural heritage, and this importance does not come without its fair share of controversy. Conservators and restorers alike must deal with many ethical dilemmas when approaching the issue of whether to clean and restore a work of art or let it degrade naturally. After all, most artists do not make their work with the deliberate expectation that it will last for centuries after their passing, and like most things in life, artwork is temporary and subject to degradation. However, these artistic intentions are never analyzed as closely as they should be: when debate arises regarding art conservation, it usually concerns not whether art should be conserved, but how. One of the most famous restoration controversies was that of Michelangelo’s frescoes on the ceiling of the Sistine Chapel, which were completed in the early 16th century. A fresco is a type of painting created on fresh plaster using water-based pigments, typical for mural paintings on large walls. In a fresco, the colors set into the plaster as both dry, creating a beautiful effect but also complicating restoration efforts due to the nature of plaster.
Restoration Process
For the restoration process, restorers worked to fill in cracks in the plaster, as well as clean the surface in a non-harmful way. Cleaning began in 1980, after the Vatican Conservation Laboratory discovered that deteriorating animal glue, applied by restorers in the late 1500s, was detaching pigments from the plaster (Academy of Arts and Sciences). Hot animal glue had been applied again in the 1700s in order to heighten colors that had been obscured by dust and smoke from the candles, oil lamps, and braziers used in the chapel for hundreds of years. Not only was the glue detaching pigments, it had also lost its transparency and darkened to a brown color, further masking Michelangelo’s original painting.
To begin cleaning, restorers used a mixture of ammonium bicarbonate, sodium bicarbonate, and an antibacterial and antifungal agent, combined with carboxymethylcellulose and distilled water, which had been extensively tested by the Vatican Conservation Laboratory and used successfully on other mural paintings in Europe (Academy of Arts and Sciences). The fresco was then divided into sections of approximately 30 square centimeters, to be cleaned individually and carefully monitored. Each designated section was dusted and washed with distilled water before the cleaning mixture was applied with either a cotton or cellulose putty. After 30 minutes, the mixture was removed, and the same process was repeated the next day.
Aside from the cleaning process, restorers had to remove overpaint done by previous restorers and fill in cracks made in the plaster (Elam). Because of the importance of the paintings and the controversy surrounding the restoration work, it was emphasized that “no color [was] added to the original frescoes, no reconstructive painting [was] done, and no color brightener [was] used” (Academy of Arts and Sciences).
Results and Implications for Future Restorations
The results of the cleaning process revealed the vibrant colors Michelangelo had originally used, completely changing what art historians had previously thought about his painting habits. Michelangelo was widely known for the muted colors in the beautiful proportions and renderings of his paintings, which historians had attributed to his work as a sculptor (Academy of Arts and Sciences). If it had not been for the restoration, this important information about Michelangelo’s painting habits would never have come to light. What was thought to be his signature “somber color palette” was actually the opposite, and the colors he used were quite typical of other mural painters of his time (Kimmelmann). Even with all the controversy that comes with art conservation, its results unveil discoveries in the connected fields of art history and science and, altogether, preserve the historical documents that are great works of art.
Works Cited
“Art Restoration: The Myth and the Reality | ConservArt.” ConservArt. N.p., n.d. Web. 30 Nov. 2016.
“A View from the ‘Ponte’.” The Burlington Magazine, vol. 129, no. 1016, 1987, pp. 707–708. www.jstor.org/stable/883211.
Elam, Caroline. “Michelangelo and the Sistine Chapel. Rome, Vatican.” The Burlington Magazine, vol. 132, no. 1047, 1990, pp. 434–437. www.jstor.org/stable/884339.
Kimmelmann, Michael. “Review/Art; After a Much-Debated Cleaning, A Richly Hued Sistine Emerges.” The New York Times. The New York Times, 14 May 1990. Web. 29 Nov. 2016.
“Saturday Afternoons at the Academy.” Bulletin of the American Academy of Arts and Sciences, vol. 43, no. 8, 1990, pp. 7–11. www.jstor.org/stable/3824775.
Finding the mutations that cause genetic disorders by studying genes and their modes of inheritance
How are mutations for genetic disorders found? Long and complicated processes are used to identify the causes of genetic disorders. Three processes discussed in this paper are identifying mutation types, studying the mode of inheritance of genes, and describing universal phenotypic symptoms of genetic disorders for diagnosis.
Not all mutations are alike. A type of mutation called a silent mutation codes for the same protein as the normal gene sequence would, so there are no consequences for the individual. However, many severe genetic disorders are caused by loss-of-function (LoF) variants (Lyon 2012). LoF variants interfere with protein function by not coding for a protein or by decreasing the function of a protein (Lyon 2012). They include nonsense mutations, in which one base is replaced with another, forming a stop codon so that the protein is not built completely; the incomplete protein may not function at all or may only partly function. LoF variants also include insertions and deletions, in which a base is inserted or deleted, causing the wrong amino acids to be incorporated and, consequently, wrong or nonfunctional proteins (Lyon 2012). Kabuki syndrome is a genetic disorder caused by several nonsense mutations, as shown in Figure 1 (Hannibal 2011). However, other types of mutations cause genetic disorders too. For example, a missense mutation, in which one base is replaced with another so that a different amino acid is used, is the cause of Ogden syndrome (Lyon 2012). Identifying the type of mutation is crucial to understanding how the mutation occurred.
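To make these distinctions concrete, here is a toy Python sketch (my own illustration, not from the cited papers) that classifies a single-codon substitution as silent, missense, or nonsense. The codon table is truncated to just the codons used in the demo.

```python
# Truncated codon table; a real tool would use the full 64-codon genetic code.
CODON_TABLE = {
    "TAT": "Tyr", "TAC": "Tyr", "TAA": "STOP",
    "GAA": "Glu", "AAA": "Lys",
}

def classify_substitution(ref_codon, alt_codon):
    """Classify a single-base codon substitution by its effect on the protein."""
    ref_aa, alt_aa = CODON_TABLE[ref_codon], CODON_TABLE[alt_codon]
    if alt_aa == "STOP":
        return "nonsense"   # premature stop codon: truncated protein (LoF)
    if alt_aa == ref_aa:
        return "silent"     # same amino acid: no consequence
    return "missense"       # different amino acid, as in Ogden syndrome

print(classify_substitution("TAT", "TAC"))  # silent   (Tyr -> Tyr)
print(classify_substitution("GAA", "AAA"))  # missense (Glu -> Lys)
print(classify_substitution("TAT", "TAA"))  # nonsense (Tyr -> STOP)
```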
The mode of inheritance of genes is usually studied using Mendel’s principles of dominant and recessive genes, and many genetic disorders can be classified in these terms (Lyon 2012). However, not all genes show clearly dominant or recessive inheritance; in complex diseases, some genes are neither (Lyon 2012). Family histories of diseases are useful for understanding how these diseases are inherited and where a mutation happened. Studying families is especially useful for rare diseases or new mutations that have not yet been affected by natural selection (Lyon 2012).
To diagnose a genetic disorder, a patient must exhibit phenotypic symptoms related to the disorder. However, it is difficult to characterize phenotypic symptoms accurately because genes are expressed differently in different individuals (Lyon 2012). Administering tests to study genomic information and standardizing the vocabulary used to describe symptoms can allow doctors and physicians to compare symptoms and diagnose with more certainty. Because it is crucial to develop standardized medical terms for symptoms, many efforts are underway, such as the Unified Medical Language System (Pathak 2011). The Human Phenotype Ontology serves to standardize descriptions of abnormal symptoms (Robinson 2008). Efforts to improve the diagnosis of psychiatric disorders are also being made: the Research Domain Criteria project proposed using neurobiological measures and observable behavioral dimensions to classify psychopathology (Lyon 2012).
Causes of some genetic disorders have been identified. However, some of them may be “false positives” (Lyon 2012): supposed causes that are apparent in the genes of some patients but are not actually causal. False positives include variants that are also found in healthy populations (Lyon 2012). Some recorded causes of sporadic dilated cardiomyopathy were found to be false positives when the National Heart, Lung, and Blood Institute Exome Sequencing Project calculated their allele frequencies (Norton 2012).
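The allele-frequency check can be sketched in a few lines of Python. This is a hedged illustration of the idea, not the project’s actual pipeline; the variant records, field names, and frequency threshold are hypothetical.

```python
def flag_likely_false_positives(candidates, max_healthy_freq=0.001):
    """Flag candidate disease variants that are too common in a healthy
    reference population to plausibly cause a rare disorder."""
    return [v for v in candidates if v["healthy_allele_freq"] > max_healthy_freq]

# Hypothetical candidates with allele frequencies from a healthy reference panel.
candidates = [
    {"variant": "chr1:12345 A>G", "healthy_allele_freq": 0.00001},  # plausibly causal
    {"variant": "chr9:67890 C>T", "healthy_allele_freq": 0.02},     # likely false positive
]
print(flag_likely_false_positives(candidates))
```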
Researchers and doctors need to keep in mind that some genes that seem likely to be the causes of genetic disorders may in reality not be.
Learning about genes, how they are inherited, and the symptoms that mutations cause is crucial to figuring out the causes of genetic disorders.
Works Cited
Hannibal MC, et al: Spectrum of MLL2 (ALR) mutations in 110 cases of Kabuki syndrome. Am J Med Genet A. 2011, 155A: 1511-1516.
Lyon GJ, Wang K: Identifying disease mutations in genomic medicine settings: current challenges and how to accelerate progress. Genome Medicine. 2012, 4: 58. 10.1186/gm359.
Norton N, Robertson PD, Rieder MJ, Zuchner S, Rampersaud E, Martin E, Li D, Nickerson DA, Hershberger RE: Evaluating pathogenicity of rare variants from dilated cardiomyopathy in the exome era. Circ Cardiovasc Genet. 2012, 5: 167-174. 10.1161/CIRCGENETICS.111.961805.
Pathak J, Wang J, Kashyap S, Basford M, Li R, Masys DR, Chute CG: Mapping clinical phenotype data elements to standardized metadata repositories and controlled terminologies: the eMERGE Network experience. J Am Med Inform Assoc. 2011, 18: 376-386. 10.1136/amiajnl-2010-000061.
Robinson PN, Kohler S, Bauer S, Seelow D, Horn D, Mundlos S: The Human Phenotype Ontology: a tool for annotating and analyzing human hereditary disease. Am J Hum Genet. 2008, 83: 610-615. 10.1016/j.ajhg.2008.09.017.
Worldbuilding in Fiction Literature
The central component of transmedia storytelling is worldbuilding, the creation of a world within a given plot line. A classic example is J.K. Rowling’s Harry Potter, in which a wizarding world is created for protagonist Harry Potter and all subsequent characters to inhabit. In terms of transmedia storytelling, worldbuilding acts as the link between the different types of media used within a single theme in order to expand that central idea. The idea is that all media used in developing a story, brand, or product are essential to the overall experience of the user. This means that there isn’t a distinction between a “primary” and a “tertiary” medium, but rather that they are regarded as a collective machine that guides the user’s interaction with the product (Kompare, 118).
Additionally, it is important to understand that worldbuilding isn’t a strategy employed for a niche audience. That is where it differs from transmedia storytelling: the world that is created is accessible across all demographics. A 14-year-old and a 40-year-old have the same access to a world built by the creator, by the audience, or in some cases both. What matters is that the content is autonomous, meaning the world is not predisposed toward a certain group (Burcher, 227). While a book itself or the central theme behind a story may cater to a specific audience, the world created is part of the larger picture, one that even the creator may not have imagined at its inception.
Literature, fiction in particular, is the quintessence of worldbuilding practices. The motivation behind writing fiction is to create a world and a storyline that feed the escapism readers seek; there is a visceral connection established between the author, the reader, and the fictional world that encapsulates the wishes and desires of the human mind while still maintaining a healthy amount of separation from the content (Fleckenstein, 297). The strength of worldbuilding is its ability to play off the human psyche and capitalize on that comprehension to keep the reader, user, or consumer attached to the world at hand.
As wonderful and empowering as it may be to the individual, worldbuilding can also have an adverse effect when it emulates reality. A world can be equally realistic as it is surreal, as seen with dystopian games like Angry Birds and Necromancer that parallel morose events in real life (Servitje, 85). Regardless of its positive or negative approach, worldbuilding is crucial in fiction literature as a means of drawing in the reader; the quality of a novel is often judged by the intricacy and plausibility of its world. This idea of a world replacing or becoming reality is why advertisers and companies are so willing to get on board with the transmedia storytelling process.
Works Cited:
- Kompare, Derek. “Conference Report: Futures of Entertainment 3, November 21-22, 2008, Massachusetts Institute of Technology, Cambridge, MA.” Cinema Journal, vol. 49, no. 1, 2009, pp. 116–120. www.jstor.org/stable/25619748.
- Burcher, Charlotte et al. “Core Collections in Genre Studies: Fantasy Fiction 101.” Reference & User Services Quarterly, vol. 48, no. 3, 2009, pp. 226–231. www.jstor.org/stable/20865077.
- Fleckenstein, Kristie S. “Writing Bodies: Somatic Mind in Composition Studies.” College English, vol. 61, no. 3, 1999, pp. 281–306. www.jstor.org/stable/379070.
- Lorenzo Servitje. “H5N1 For Angry Birds: Plague Inc., Mobile Games, and the Biopolitics of Outbreak Narratives.” Science Fiction Studies, vol. 43, no. 1, 2016, pp. 85–103. jstor.org/stable/10.5621/sciefictstud.43.1.0085.