Sam Han | Eportfolio

Washington Mayor Fenty Is Underdog in Primary

Not sure why The Times is making it out that Fenty’s decline in Black support has something to do with his hiring of Michelle Rhee. A bit of a category mistake in my view.

Posted via email from sam han’s posterous

How black people use Twitter. – By Farhad Manjoo – Slate Magazine

Illustration by Alex Eben Meyer.

As far as I can tell, the Twitter hashtag #wordsthatleadtotrouble got started at about 11 a.m. Pacific Time on Sunday morning, when a user named Kookeyy posted this short message: “#wordsthatleadtotrouble ‘Don’t Worry I gotchu.” A couple minutes later, Kookeyy posted another take on the same theme: “#wordsthatleadtotrouble – I Love Yuh *kiss teeth*.” On Twitter, people append hashtags to categorize their messages—the tags make it easier to search for posts on a certain topic, and they can sometimes lead to worldwide call-and-response conversations in which people compete to outdo one another with ever more hilarious, bizarre, or profane posts. A woman in South Africa named Tigress_Lee moved the chatter in that direction: “#wordsthatleadtotrouble ‘the condom broke’!” she wrote. From there, the meme took off. “We need to talk #wordsthatleadtotrouble,” declared BigJamaal (11,920 followers), and then he proceeded to post a blizzard of suggestions, including “#wordsthatleadtotrouble I dont know why you got that Magnum in your wallet you clearly live a Durex lifestyle.”

Over the next few hours, thousands of people added to the meme. According to Trendtistic, a site that monitors and archives hot Twitter topics, #wordsthatleadtotrouble was one of Twitter’s top 20 hashtags on Sunday, and it was the top tag that was not based on some real-life event (like the Teen Choice Awards or football). By Monday morning, Twitter was displaying #wordsthatleadtotrouble on its list of “trending topics.” If you’d clicked on the tag, you would have noticed that contributions to the meme ranged from the completely banal (“#wordsthatleadtotrouble we just going out with friends!”) to the slightly less so (“#wordsthatleadtotrouble I didn’t know she was your sister”). If you clicked when the meme was at its peak—that is, before it spread widely beyond the cluster of people who started it—you would have also noticed something else: To judge from their Twitter avatars, nearly everyone participating in #wordsthatleadtotrouble was black.

Call #wordsthatleadtotrouble a “blacktag”—a trending topic initiated by a young African-American woman in Hollywood, pushed to a wider audience by a black woman in South Africa, and then pushed over the top by thousands of contributions from users who appear to be black teenagers all over the United States. This story is not at all out of the ordinary on Twitter. #wordsthatleadtotrouble was one of a few such tags that hit the trending topics list on Monday—others included #ilaugheverytime and #annoyingquestion—and it’s typical of the sort of tag that pops up almost daily. (A new one, #wheniwaslittle, hit Tuesday morning.)

The prevalence of these tags has long puzzled nonblack observers and sparked lots of sometimes uncomfortable questions about “how black people use Twitter.” As the Awl’s Choire Sicha wrote last fall, “At the risk of getting randomly harshed on by the Internet, I cannot keep quiet about my obsession with Late Night Black People Twitter, an obsession I know some of you other white people share, because it is awesome.”

As a nonwhite person, I must concur: It is awesome—although I’m less interested in the content of these tags than in the fact that they keep getting so popular. What explains the rise of tags like #wordsthatleadtotrouble? Are black people participating in these types of conversations more often than nonblacks? Are other identifiable groups starting similar kinds of hashtags, but it’s only those initiated by African-Americans that are hitting the trending topics list? If that’s true, what is it about the way black people use Twitter that makes their conversations so popular? Then there’s the apparent segregation in these tags. While you begin to see some nonblack faces after a trending topic hits Twitter’s home page, the early participants in these tags are almost all black. Does this suggest a break between blacks and nonblacks on Twitter—that real-life segregation is being mirrored online?

After watching several of these hashtags from start to finish and talking to a few researchers who’ve studied trends on Twitter, I’ve got some potential answers to these questions. Black people—specifically, young black people—do seem to use Twitter differently from everyone else on the service. They form tighter clusters on the network—they follow one another more readily, they retweet each other more often, and more of their posts are @-replies—posts directed at other users. It’s this behavior, intentional or not, that gives black people—and in particular, black teenagers—the means to dominate the conversation on Twitter.

There are loads of caveats to this analysis, which I’ll get to in a moment. But first, a digression into one of the leading explanations for these memes—the theory that the hashtags are sparked by something particular to black culture. “There’s a long oral dissing tradition in black communities,” says Baratunde Thurston, the Web editor of the Onion, whose funny presentation at this year’s South by Southwest conference, “How To Be Black Online,” argued that blacktags were a new take on the Dozens. “Twitter works very naturally with that call-and-response tradition—it’s so short, so economical, and you get an instant signal validating the quality of your contribution.” (If people like what you say, they retweet it.)

To me, the Dozens theory is compelling but not airtight. For one thing, a lot of these tags don’t really fit the format of the Dozens—they don’t feature people one-upping one another with witty insults. Instead, the ones that seem to hit big are those that comment on race, love, sex, and stereotypes about black culture. Many read like Jeff Foxworthy’s “You might be a redneck …” routine applied to black people—for instance, last December’s #ifsantawasblack (among the tamer contributions: “#ifsantawasblack he wouldnt say ho ho ho, he would say yo yo yo”) or July’s #ghettobabynames (e.g., “#ghettobabynames Weavequisha.”) The bigger reason why the Dozens theory isn’t a silver bullet is that a lot of people of all races insult one another online generally, and on Twitter specifically. We don’t usually see those trends hit the top spot. Why do only black people’s tweets get popular?

Farhad Manjoo is Slate’s technology columnist and the author of True Enough: Learning To Live in a Post-Fact Society. You can e-mail him at farhad.manjoo@slate.com and follow him on Twitter.

Nice round-up.

Posted via email from sam han’s posterous

The Artificial Ape: How Technology Changed the Course of Human Evolution by Timothy Taylor | Book review | Books | The Guardian

There has been a rash of books on human evolution in recent years, claiming that it was driven by art (Denis Dutton: The Art Instinct), cooking (Richard Wrangham: Catching Fire), sexual selection (Geoffrey Miller: The Mating Mind). Now, Timothy Taylor, reader in archaeology at the University of Bradford, makes a claim for technology in general and, in particular, the invention of the baby sling – not, as you may have thought, in the 1960s but more than 2m years ago.

All these theories and speculations are in truth complementary facets of an emerging Grand Universal Theory of Human Origins. The way they overlap, reinforce one another and suggest new leads is too striking to miss. What they have in common is a reversal of the received idea of evolution through natural selection. In this, a mutation takes place that happens to be useful; it is retained and spreads through the population. In the new theory, proto-human beings, through innovative technologies, created the conditions that led to a rapid spread of new mutations. In other words, we didn’t evolve a big brain (three to four times the size of a chimp’s) and then use it to develop human culture; we first departed from genetically fixed behaviour patterns, and this led to ever-increasing brain capacity and hence more innovations. The plethora of speculations as to how this happened is fascinating and will probably lead to a true understanding of the course of human evolution, but most people will want proof.

Impeccably detailed evidence is now emerging from the genomics revolution. Taylor cites one of the best attested examples of a human cultural innovation leading to genetic change: the drinking of cow’s milk. In the ancestral human condition only babies up to the age of weaning could digest milk, but tolerance to cow’s milk has spread through all populations that have practised cattle farming. Globally, this process is still incomplete and genomics has revealed that milk tolerance has evolved on several separate occasions by different genetic mechanisms.

After the switch to an upright posture, probably the biggest single anatomical change on the journey from apes to humans was the weakening of the jaw. In apes, the jaw is large and protrudes way beyond the nose. It is attached by muscle to a bony ridge on the top of the skull and has a force many times that of a human jaw. Recent genomics research has shown that a large mutation about 2.4m years ago disabled the key muscle protein in human jaws. We still have the disabled protein today, and that weakened jaw enabled a raft of innovations. The ape brain could not grow because of the huge muscle load anchored to the skull’s crest, and apes cannot articulate speech-like sounds because of the clumsy force of their jaws. This mutation allowed the increase in human brain size and the acquisition of language.

But why did it happen? Wrangham maintains that it was cooking that led to the change. Cooked food does not need strong jaws. In genetics a function that becomes redundant always leads to the gene being disabled by mutations. Around 2.4m years ago an ape switched to mostly cooked food. In the fossil record, a new proto-human appeared 1.8-1.9m years ago: Homo erectus had a much larger brain and no crest on the skull, indicating that the weakened jaw muscle was now standard.

There were other advantages to cooked food. It seems that in all animals the gut and the brain compete for energy: creatures with large guts spend many hours a day eating and have small brains. Humans have a gut only 60% as big as you’d expect for their body size: cooked food made that possible, and the energy saved went into feeding that enormous brain.

Taylor endorses Wrangham’s hypothesis but believes it is not enough. Not only is our brain very large, it is proportionately enormous at birth, creating problems at delivery for narrow-hipped, upright-standing women and even more during the first few years, when babies are extremely vulnerable. Factor in the African savannah 2m years ago, teeming with enormous predators, and you wonder how we are still here. For Taylor, the crucial innovation was the baby sling, which enabled proto-human mothers to carry their vulnerable babies (infant apes, of course, cling to their hairy mothers’ backs).

Unlike milk tolerance, jaw muscles and gut length – all amenable to genetic investigation in the present – prehistoric baby slings have left no evidence behind, so this hypothesis is likely to remain speculative. For the lack of any clinching evidence, Taylor allows himself to be side-tracked in the second half of the book into Barthesian digressions on the role of the object in human cultures. Some of this material is far-fetched, reaching its nadir in the suggestion that in the mirrors given to them by French sailors in 1772, the doomed Tasmanian Aborigines saw “some premonition of the coming global age of screen culture”.

This loss of focus is a pity because Taylor, along with the other writers mentioned, is clearly on to something. The new understanding of human evolution should be a massive relief to many. The anguish that Darwin caused – all purpose gone, chance and brute necessity rule – seems to have been misplaced. There is no goal in nature, nor any God-given purpose, but human evolution has been driven by striving towards a better way of living. As they domesticated cattle, sheep, goats, pigs, horses, cats, dogs and bees, humans were simultaneously domesticating themselves. By our own efforts we made ourselves human.

Peter Forbes’s Dazzled and Deceived: Mimicry and Camouflage is published by Yale.

How is this any different from Bernard Stiegler and Andre Leroi-Gourhan’s arguments?

Posted via email from sam han’s posterous

The Skinny on Google Instant | The Atlantic Wire

So this is what the exploding logo was all about. Today, one of the world’s fastest search engines just got a little faster. Google unveiled its newest update to Web search, Google Instant. Now, when users begin typing a keyword search, the results instantly appear below. Marissa Mayer, the company’s VP of search products, says the new update is all about saving users’ time. “Our testing has shown that Google Instant saves the average searcher two to five seconds per search,” Mayer says. “That may not seem like a lot at first, but it adds up. With Google Instant, we estimate that we’ll save our users 11 hours with each passing second!” Reactions from around the Web:
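A quick back-of-the-envelope check of that figure, offered as a rough sketch rather than anything from the article itself (it assumes on the order of a billion Google searches per day, a commonly cited estimate at the time):

    # Rough arithmetic behind the "11 hours saved with each passing second" claim.
    # Assumption (not in the article): roughly one billion Google searches per day.
    searches_per_day = 1_000_000_000
    searches_per_second = searches_per_day / 86_400       # about 11,600 searches every second
    seconds_saved_per_search = 3.5                         # midpoint of the quoted 2-5 seconds
    total_seconds_saved_each_second = searches_per_second * seconds_saved_per_search
    print(total_seconds_saved_each_second / 3600)          # about 11.3 hours, matching the quoted figure

That squares with the number Charles Arthur works through below.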

  • This Will Save the World a Lot of Time, writes Charles Arthur at The Guardian: “Marissa Mayer, the company’s vice president of search and user experience, said that until now, each search typically lasts 25 seconds – 9 seconds of typing, 1 second in which the query reaches Google, is processed and sent back, and 15 seconds during which the user considers which search result to click on. But with Google Instant the average search will be shortened by two to five seconds per query – which, given the billions of people who use the service every week, would mean 11 hours of searching saved every second.”

  • It’s Lightning Fast, observes Claire Cain Miller at The New York Times: “Start typing ‘San Francisco’ in the search box, and by the time you get through ‘San,’ you will already see a map of the city, a collection of photos of landmarks and links to Alcatraz and the San Francisco visitors’ center. Or, without typing anything more, select Google’s next guess, Santa Cruz, and see a map, photos and links for that city. Type ‘the gi’ and the search box shows ‘the girl with the dragon tattoo,’ as if it is reading your mind.”
  • It Won’t Slow You Down, notes Jared Newman at PC World: “Google claims that Instant won’t considerably slow down Internet connections, because the amount of data delivered for search terms is relatively small, and because the system only sends parts of the page that change when more typing alters a search result. For connections that are already slow, Google Instant automatically turns off, and users can also shut off the service through their user preferences or by clicking the drop down box to the right of the search bar.”
  • Could Change the Face of SEO, writes Charles Arthur at The Guardian: “The impact could be dramatic on another group who have previously relied heavily on Google’s old search results page. “Search engine optimisation” (SEO) experts have built a gigantic business from analysing what results appear for a particular set of queries, especially to Google. However the new system, with its live updates of queries, means that it will be more difficult for SEO analysts to work out which results will do well from which query, because the results will keep changing as the user types. It will also be harder to examine the results mechanically.”
  • I Have No Idea What Google Does Anymore, says a resigned Alex Balk: “Google lost me around Wave (remember that?) and ever since then I’ve just been in a fog of confusion and indifference. I can make cellphone calls from my Gmail inbox? Okay! There’s a new priority system that tells me what mail I need to read before I’ve opened it? Sure! Whatever you say, Google! When we are all toiling in the mines as part of the Google Serf program I’m sure we will still appreciate these few years of technological innovations. If only I understood what the hell they were.”

And the Atlantic Wire has a round-up of responses…

Posted via email from sam han’s posterous

‘Google Instant’ Search Feature Unveiled At Google Press Conference (VIDEO)

A lot of people are talking about it…

I’m kind of interested in it for its theological consequences…

Posted via email from sam han’s posterous

Ambohimirary Journal – In Madagascar, the Living Dance With the Dead

Supremely interesting.

Posted via email from sam han’s posterous

MIT TechTV – Scratch@MIT Friday Keynote: Rethinking Identity, Rethinking Participation

Via Henry Jenkins (@henryjenkins)

Posted via email from sam han’s posterous

Print’s History Beyond Books

I’ve done a specific study of the Low Countries, and there, something like 40 percent of all the books published before 1600 would have taken less than two days to print. That’s a phenomenal market, and it’s a very productive one for the printers. These are the sort of books they want to produce, tiny books. Very often they’re not even trying to sell them retail. They’re a commissioned book for a particular customer, who might be the town council or a local church, and they get paid for the whole edition. And those are the people who tended to stay in business in the first age of print.

The earliest printed materials did not resemble books in the least. (This article is somewhat unremarkable if you are familiar with the historian Roger Chartier’s work.)

Posted via email from sam han’s posterous

Op-Ed Contributor – Google’s Earth

“I ACTUALLY think most people don’t want Google to answer their questions,” said the search giant’s chief executive, Eric Schmidt, in a recent and controversial interview. “They want Google to tell them what they should be doing next.” Do we really desire Google to tell us what we should be doing next? I believe that we do, though with some rather complicated qualifiers.

Science fiction never imagined Google, but it certainly imagined computers that would advise us what to do. HAL 9000, in “2001: A Space Odyssey,” will forever come to mind, his advice, we assume, eminently reliable — before his malfunction. But HAL was a discrete entity, a genie in a bottle, something we imagined owning or being assigned. Google is a distributed entity, a two-way membrane, a game-changing tool on the order of the equally handy flint hand ax, with which we chop our way through the very densest thickets of information. Google is all of those things, and a very large and powerful corporation to boot.

We have yet to take Google’s measure. We’ve seen nothing like it before, and we already perceive much of our world through it. We would all very much like to be sagely and reliably advised by our own private genie; we would like the genie to make the world more transparent, more easily navigable. Google does that for us: it makes everything in the world accessible to everyone, and everyone accessible to the world. But we see everyone looking in, and blame Google.

Google is not ours. Which feels confusing, because we are its unpaid content-providers, in one way or another. We generate product for Google, our every search a minuscule contribution. Google is made of us, a sort of coral reef of human minds and their products. And still we balk at Mr. Schmidt’s claim that we want Google to tell us what to do next. Is he saying that when we search for dinner recommendations, Google might recommend a movie instead? If our genie recommended the movie, I imagine we’d go, intrigued. If Google did that, I imagine, we’d bridle, then begin our next search.

We never imagined that artificial intelligence would be like this. We imagined discrete entities. Genies. We also seldom imagined (in spite of ample evidence) that emergent technologies would leave legislation in the dust, yet they do. In a world characterized by technologically driven change, we necessarily legislate after the fact, perpetually scrambling to catch up, while the core architectures of the future, increasingly, are erected by entities like Google.

Cyberspace, not so long ago, was a specific elsewhere, one we visited periodically, peering into it from the familiar physical world. Now cyberspace has everted. Turned itself inside out. Colonized the physical. Making Google a central and evolving structural unit not only of the architecture of cyberspace, but of the world. This is the sort of thing that empires and nation-states did, before. But empires and nation-states weren’t organs of global human perception. They had their many eyes, certainly, but they didn’t constitute a single multiplex eye for the entire human species.

Jeremy Bentham’s Panopticon prison design is a perennial metaphor in discussions of digital surveillance and data mining, but it doesn’t really suit an entity like Google. Bentham’s all-seeing eye looks down from a central viewpoint, the gaze of a Victorian warder. In Google, we are at once the surveilled and the individual retinal cells of the surveillant, however many millions of us, constantly if unconsciously participatory. We are part of a post-geographical, post-national super-state, one that handily says no to China. Or yes, depending on profit considerations and strategy. But we do not participate in Google on that level. We’re citizens, but without rights.

Much of the discussion of Mr. Schmidt’s interview centered on another comment: his suggestion that young people who catastrophically expose their private lives via social networking sites might need to be granted a name change and a fresh identity as adults. This, interestingly, is a matter of Google letting societal chips fall where they may, to be tidied by lawmakers and legislation as best they can, while the erection of new world architecture continues apace.

If Google were sufficiently concerned about this, perhaps the company should issue children with free “training wheels” identities at birth, terminating at the age of majority. One could then either opt to connect one’s adult identity to one’s childhood identity, or not. Childhoodlessness, being obviously suspect on a résumé, would give birth to an industry providing faux adolescences, expensively retro-inserted, the creation of which would gainfully employ a great many writers of fiction. So there would be a silver lining of sorts.

To be sure, I don’t find this a very realistic idea, however much the prospect of millions of people living out their lives in individual witness protection programs, prisoners of their own youthful folly, appeals to my novelistic Kafka glands. Nor do I take much comfort in the thought that Google itself would have to be trusted never to link one’s sober adulthood to one’s wild youth, which surely the search engine, wielding as yet unimagined tools of transparency, eventually could and would do.

I imagine that those who are indiscreet on the Web will continue to have to make the best of it, while sharper cookies, pocketing nyms and proxy cascades (as sharper cookies already do), slouch toward an ever more Googleable future, one in which Google, to some even greater extent than it does now, helps us decide what we’ll do next.

William Gibson is the author of the forthcoming novel “Zero History.”

The great William Gibson on Google.

Posted via email from sam han’s posterous