Quick tip: if you’re using Papers for iPad (or iPhone) and can’t get your Mac to see the device, make an ad-hoc network using the Create Network… option in the Mac’s Wi-Fi menu. Works like a charm.
There’s a video on YouTube about thinking skeptically, the Baloney Detection Kit. It’s sponsored by the Richard Dawkins Foundation.
Many of the comments are about whether science is an alternate religion, but there’s also a theme of denying climate change. What I find notable, though not surprising, is that deniers are using the Spam button to censor the messages of people asserting climate change.
It’s worth bearing in mind how tools built to protect collective norms (e.g. defeating spam or terrorism) can be turned against anyone an individual simply disagrees with.
William Cohen’s blog has a post about a long, fascinating thread on a political blog whose community discovered it was being analyzed by algorithms. The community investigated, and some members tried to fight back by gathering and posting personal/private information on the researchers.
I found comment #303 particularly interesting:
Tae, let that be a lesson: Blogs are not inert things that can be studied dispassionately! Sometimes they can bite back — jump right up at you through the screen.
And the meaning of sentences cannot be dissected by computer analysis.
I wonder if this will happen more as people gain awareness that they’re being analyzed.
[by way of Matthew Hurst]
I keep another blog called News Mirror where I write about what I read in the news. Today I got ticked off at bad science, maybe prompted by Saturday night entertainment at the iSLC conference this weekend.
UPDATE: An excellent tear-down in Slate: Rigging a study to make conservatives look stupid. This is why I don’t blog much… because to say anything I can stick to takes more attention and mental energy than I have to spare for a blog. Sometimes that censor takes a break, though.
A study says liberal brains “are more responsive to informational complexity.” Test: You sit in front of a computer screen and wait for a letter to appear on it. You’re supposed to tap your keyboard if it’s an M, but not if it’s a W. The experimenters mix it up but give you more M’s than W’s to see whether you get lulled into tapping when you shouldn’t. Results: 1) On M’s, liberals and conservatives responded equally well. 2) On W’s, liberals were twice as likely to be among the more accurate responders. 3) On electrical measurements of the brain area that monitors conflict “between a habitual tendency … and a more appropriate response,” liberals were five times more likely to show brain activity. Unofficial scientist/media spin: Liberals are smarter. Official scientist/media spin: Liberals are smarter, except when circumstances call for a knee-jerk ideologue. Knee-jerk liberal spin: We’re smarter because we have more agile brains. Thoughtful liberal spin: Then again, maybe we have more agile brains because we’re smarter. (Human Nature’s view: Liberals are smart, except when their knees jerk.) To tap a reply on your own keyboard, enter the Fray.
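The letter task described is a standard go/no-go paradigm: respond to the frequent “go” stimulus (M), withhold on the rare “no-go” stimulus (W). A minimal sketch of how one run might be scored — the data here are made up, and the study’s actual analysis is more involved:

```python
def score_go_nogo(stimuli, responses):
    """Score a go/no-go run.

    stimuli:   list of 'M' (go) / 'W' (no-go) letters shown
    responses: parallel list of booleans (True = key tapped)
    Returns (hit rate on M's, false-alarm rate on W's).
    """
    hits = sum(1 for s, r in zip(stimuli, responses) if s == 'M' and r)
    false_alarms = sum(1 for s, r in zip(stimuli, responses) if s == 'W' and r)
    return hits / stimuli.count('M'), false_alarms / stimuli.count('W')

# Hypothetical run: mostly M's, as in the study, to lull the subject
# into tapping by habit.
stimuli = ['M', 'M', 'W', 'M', 'M', 'W', 'M', 'M']
responses = [True, True, True, True, True, False, True, True]
hit_rate, fa_rate = score_go_nogo(stimuli, responses)
```

The study’s result (2) corresponds to the false-alarm rate: both groups had similar hit rates on M’s, but liberals were more often among the low false-alarm responders on W’s.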
I cringe to think how this is going to play out on talk radio. I hope liberals don’t reinforce their stereotype as supercilious and arrogant by touting this and denigrating conservatives. It should be dealt with soberly and not as a partisan matter. (Like so many things that aren’t.)
The following is from the blog of the Long Now Foundation, a great organization of which I’m a proud charter member.
Long Now seminar speaker Alex Wright brought to our attention the truly visionary work of the Belgian Paul Otlet and his Mundaneum of 1910 (video from a documentary above, and Stewart Brand’s description from the talk below).
The greatest unknown revolutionary was the Belgian Paul Otlet. In 1895 he set about freeing the information in books from their bindings. He built a universal decimal classification and then figured out how that organized data could be explored, via “links” and a “web.” In 1910 Otlet created a “radiated library” called the Mundaneum in Brussels that managed search queries in a massive way until the Nazis destroyed the service. Alex Wright showed an astonishing video of how Otlet’s distributed telephone-plus-screen system worked. – Stewart Brand on Alex Wright
I just volunteered to review papers in 2008 for several divisions and SIGs of AERA, the American Educational Research Association. It’s a very long list (12 divisions, 3 committees, and 160 SIGs). Below are the SIGs that grabbed my attention.
These two blurbs appeared in the same issue of ACM Technews (March 21, 2007)…
Girls Ask Alice for Programming Skills
eWeek (03/19/07) Taft, Darryl K.
A program called Alice, originally conceived by Carnegie Mellon’s Stage 3 Research lab, has proved effective in getting young women excited about computer programming. Alice allows those who do not have high-level programming abilities to try their hand at creating 3D computer animated stories, using characters, scripting tools, and pre-existing graphic elements. Originally designed to help build virtual environments, Alice was eventually given a drag-and-drop interface, which has made it an effective tool in introducing both women and minorities to computer programming, according to CMU. A study was conducted to see what impact a version of Alice with storytelling support had on girls, compared to a version without storytelling support, and the “Results of the study suggest that girls are more motivated to learn programming using Storytelling Alice; study participants who used Storytelling Alice spent 42 percent more time programming and were more than three times as likely to sneak extra time to work on their programs as users of Generic Alice–16 percent of Generic Alice users and 51 percent of Storytelling Alice users snuck extra time,” says CMU graduate student Caitlin Kelleher, who developed Storytelling Alice. Using Alice in middle school, where many girls are found to lose interest in math and science, provides students with positive exposure to programming. The program has also been used in colleges and high schools. The program “really seems to be hitting its stride this year,” said IBM Rational division chief scientist Grady Booch, after attending the ACM Special Interest Group on Computer Science Education’s (SIGCSE) 2007 symposium in Covington, Ky. To learn about ACM’s Committee on Women and Computing, visit http://women.acm.org
Now Beauty Is in the Eye of the Computer
Sydney Morning Herald (Australia) (03/18/07) Dasey, Daniel
After spending several years refining computer software designed to rate the attractiveness of women, Australian computer scientists Hatice Gunes and Massimo Piccardi at the University of Technology, Sydney, are now looking for commercial partners. The software is designed to quickly analyze a photograph of a woman’s face and immediately produce a beauty rating on a scale of 1 to 10. “Potential applications exist in the entertainment industry, cosmetic industry, virtual media, and plastic surgery,” the researchers write in a paper in the International Journal of Human-Computer Studies. Piccardi is especially excited about the idea of having doctors use the facial analysis technology to ensure that modifications for plastic surgery patients improve their attractiveness. The program can predict how beautiful humans would consider a female face to be, to within plus or minus 1.5 marks, and the researchers say the margin of error could be reduced with continued development. The beauty quotient of the software is based on 14 facial measurements, 13 related ratios, and images of supermodels, actresses, and more than 200 other women.
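The article doesn’t say how the 14 measurements and 13 ratios combine into a score, so here is a purely illustrative toy sketch of that kind of pipeline. Everything in it — the adjacent-ratio reading of “14 measurements, 13 related ratios,” the linear weighting, the clamping to 1–10 — is my assumption, not the researchers’ actual model:

```python
def adjacent_ratios(measurements):
    """One plausible reading of '14 measurements, 13 related ratios':
    each ratio relates consecutive measurements (an assumption)."""
    return [b / a for a, b in zip(measurements, measurements[1:])]

def beauty_score(ratios, weights, bias=5.5):
    """Toy linear model mapping ratios to a 1-10 score, clamped to the
    scale. The real model, trained on images of supermodels and
    actresses, is not described in the article."""
    raw = bias + sum(w * (r - 1.0) for w, r in zip(weights, ratios))
    return max(1.0, min(10.0, raw))

ratios = adjacent_ratios([10.0, 12.0, 6.0, 9.0])  # made-up measurements
score = beauty_score(ratios, weights=[1.0, -0.5, 0.8])
```

A real system would learn the weights by regressing against human ratings, which is presumably where the ±1.5-mark margin of error comes from.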
I’ve thought a lot about the economies of online cooperation and I found this paper interesting mostly because it was written by a sociologist in 1999. I took issue with many of its assertions, but was intrigued by what the author did and didn’t consider about the technical issues and how 1999 his examples and reasoning were. In 1999, not even Nupedia (which gave way to Wikipedia) existed yet. How would he revise his analysis for Wikipedia?
The analysis of gifts and motivation suffers from a shallow understanding of the implications of atoms versus bits. Bits are not simply atoms that freely move and replicate. They do, but that makes for a new logic, one that our vocabulary and metaphors haven’t caught up with yet. “Sharing a story” and “sharing a pie” have very different implications. He identifies this as the trait of indivisibility, that “one person’s consumption of a good does not reduce the amount available to another.” (I can’t find a definition of this term anywhere else, but I’ll go with it.) He brings this up in order to discuss public goods, but he fails to identify an important implication: when all “goods” are the functionally equivalent results of activity on the information network, then the appropriate framing is not stuff but action. He analyzes Usenet posts as gifts… would you describe helping an elderly person across the street as a gift? It’s an action.
The “remarkable property of online interaction” he describes is not new in a categorical way; it just takes information exchange to an extreme. We could always share ideas indivisibly and non-excludably (e.g. language itself). As he points out, near-zero costs can change the system in non-linear ways. Today storage costs are near zero. What’s “unprecedented in the history of human society” is that the records and byproducts of all our activities create persistent artifacts.
Digital goods then are not created per se, they are the by-product of activity. Activity is the appropriate frame of analysis.
I laughed when I read the rationale for why an operating system would come before a word processor. He’s right that programmers, when not externally compensated, make the things that they want to use. And that is exactly why the first successful open-source project was Emacs, a programmer’s text editor. In fact, Emacs was the motivation for the GPL in the first place. Developers had been building free GNU software for years before Linus learned to program. Linux was a new kernel on which all that software could run. Seeing the Linux kernel as the driving force is an easy mistake to make if you’re oriented to goods, because it’s a pretty valuable good. But the volunteer army wanted to do, and first built the tools to do it with. Eventually an ideological element emerged, contending that all software should be free because it can be, but most contributors just make what works for them. The magic is that they can share the byproducts of their activity at a trivial cost.
The motivations offered, reciprocity and reputation, seem pretty similar to me, as does “altruism”. It’s all a continuum of scope. As Kollock points out, reciprocity is not expected from the party helped but from the group as a whole. This is contingent on reputation. “Reputation” in an online community can yield other rewards. “Self-image as an efficacious person” is an endogenous reward, but there are other rewards that are exogenous, such as the likelihood of being hired. Altruism is a further abstraction, the belief that being good pays off somehow. The larger the scope, the less direct the link between action and reward.
All said, I think the paper does a good job of illustrating “the economies of online cooperation,” as it purports to. I focused on my misgivings because I thought they’d make a more interesting post.
This study controlled for loudness and ringtones, but commuters still found mobile-phone conversations more annoying than face-to-face ones. I suspect it’s partly due to prejudice, but an interesting hypothesis is raised:
Unfortunately, Monk and his colleagues don’t provide the final answer; more research is called for. But the problem seems to be that people pay more attention when they hear only half a conversation. It’s apparently easier to tune out the continuous drone of a complete conversation, in which two people take turns speaking, than it is to ignore a person speaking and falling silent in turns.