About Sean D. Kelly
Sean Dorrance Kelly is the Teresa G. and Ferdinand F. Martignetti Professor of Philosophy at Harvard University. He is also Faculty Dean at Dunster House, one of the twelve undergraduate Houses at Harvard. He served for six years as chair of Harvard's Department of Philosophy.
Kelly earned an Sc.B. in Mathematics and Computer Science and an M.S. in Cognitive and Linguistic Sciences from Brown University in 1989. After three years as a Ph.D. student in Logic and Methodology of Science, he received his Ph.D. in Philosophy from the University of California at Berkeley in 1998.
Before arriving at Harvard in 2006, Kelly taught at Stanford and Princeton, and he was a Visiting Professor at the Ecole Normale Supérieure in Paris.
Sean Kelly's work focuses on various aspects of the philosophical, phenomenological, and cognitive neuroscientific nature of human experience. He is a world authority on 20th century European Philosophy, specializing in the work of Martin Heidegger and Maurice Merleau-Ponty. He has also done influential work in philosophy of mind and philosophy of perception.
Kelly has published articles in numerous journals and anthologies and he has received fellowships or awards from the Guggenheim Foundation, the NEH, the NSF and the James S. McDonnell Foundation, among others.
Fun fact: He appeared on The Colbert Report in 2011 to talk about All Things Shining.
Sean Kelly lives at Dunster House with his wife, the Harvard Philosopher Cheryl Kelly Chen, and their two boys, Benjamin and Nathaniel.
this was an excellent meeting of minds; talking in terms of us being critters for whom things matter, and of this not being the result of a process of following rules/procedures (being programmed/wired) but a whole other kind of relational ars/expertise, was both clear and inspiring, joining your voices with an important Greek chorus reminding us that we are not machines.
http://pubpages.unh.edu/~jds/SanDiego.htm
AI is such an interesting way to explore the peculiarity of being human and approach the question of meaning/relevance.
I just received the fall 2010 Harvard Review of Philosophy and there is a topical piece on the strangeness of being human, where Franklin Perkins contrasts fish/men and European/East Asian Philosophy. Placing us in nature mitigates some classical problems of explaining our position in it: mind-body dualism, free will, etc. We can swap eternal truths, universals and souls for human experience: emotion, human fragility and family:
“…emptying the human of any fixed content – this “fasting of the heart” – does not lead to exiting the concrete world for some mystical unity with a transcendent heaven. Rather, awareness of nature/heaven provides a pivot within our singular experience, allowing shifts in how we label the concrete world around us. Ultimately, this flexibility allows us to affirm nature, not as the abstract oneness of things but in its singularity in any moment”.
I wonder whether his thoughtful comparison to other animals, and finding exceptionalism in accepting our insignificance, provides any further insights into existential achievement vs. computing. dmf, how qualitatively distinct is the “human-being dwelling poetically as accidental/contingent”? In Wizard of Oz style, can we approach that excess if we give a machine a body or an animal a more advanced brain?
that’s not easy to answer, in that we are ‘just’ animals with more advanced (but still kluged) brains, but our whole beings are expressive of our cares; we are in this sense homo rhetoricus, so maybe, maybe not.
Click to access cradle.pdf
in another incarnation I wrote that Rorty (and others) were correct (pragmatically speaking) that descriptions, like novels, gave us better models of talking about ethics than analytic/quasi-scientific quests for certainty/foundations did.
But, as St. Fish pointed out in Doing What Comes Naturally and elsewhere, there are serious limits to the therapeutic value/power of critical-thinking/theory-hope/self-knowledge in relation to readers, who do not so much have ideas/habits as they are composed of/by them, such that significant changes in our outlooks/pre-judices are more like matters of conversion. To which I would add that one can no more follow a novel than a formal list of rules; hence my point about seeing through characters vs. imitating them.
This being the case, I thought (and still do) that Bert's nonconceptual coping/expertise might offer us a clue to the kind of therapy that reaches us at this level of depth-psychology (where we live/dwell, so to speak). My only recent addition would be to build on Sean's excellent work on visual perception to include Wittgensteinian methods relating to seeing-as/aspect-dawning.
The last line of the NYT Opinion piece really nails it. It strikes me that many of my technologically-oriented peers have this fundamentally silly idea that a technological achievement (as narrow and as significant as Watson) is a quick trip to full human capability, which they haven't even begun to grasp. I think it's a manifestation of Borgmann's "availability" ideology. It strikes me as pervasive in our culture (but maybe it's just pervasive among the type of people I associate with), and our full human capabilities are grossly under-appreciated.
I appreciate the issues even though I don’t have a full grasp of them. Still, I was surprised at the novelty and insight of Dreyfus’ Barwise Prize lecture. I recommend it to those who find today’s NYT Opinion piece intriguing.
my related worry is Eric Schlosser's (the Fast Food Nation guy) specter of the McJob: that just as we have applied factory standards to agriculture and restaurants, the managerial desire for conformity/uniformity/speedy/easy delivery will continue to prefer close-enough approximations of performance, such that people will not be given expertise/skills (the "thinking" will be done by machines) that would make them hard to replace (and give them some sense of accomplishment/worth), but rather will be given behavioral training to respond to programmed cues like chimps in a lab. People as disposable resources. This isn't Soylent Green future fear-mongering; this is happening now. Even in the field of psychotherapy, now behavioral management, things are quickly being reduced to people/clinicians being trained to follow manuals, which teach them how to train employees/clients to be work-ready. In NY state there is a computer program that state psychiatrists' prescriptions are measured against and flagged for falling outside of the programmed parameters, etc.
http://www.mit.edu/~sturkle/
De-skilling is an old story: productivity improvement. I did a study a few years back on the potential rate of return on investments in intelligent machine technology. It's expected to be quite high. I think the Dreyfus/Kelly argument about Watson suggests just how far productivity enhancement can go given the technology frontier, and the frontier is moving.
I think HD's Internet book indicates how far it may go as the technology frontier moves out. HD doesn't dismiss the possibility of cameras being able to detect subtle bodily movements (indications of intention, as I recall) that would allow computers to interact with humans (I seem to recall the discussion was about Second Life and other net communities) as if the computers had bodies (to some extent).
One of the concerns often expressed is the deskilling of production jobs. That is just the tip of the iceberg. My observation is that deskilling has just started to reach white collar jobs, but when it does, it'll do so with a vengeance, especially until we figure out what qualities are being displaced. The problem often is, "you don't know what you've got 'til it's gone," to quote Joni Mitchell. The Watson discussion points in this direction. Dreyfus's On the Internet addresses the deskilling of teaching (or at least the lowest common denominator form of it — information exchange). Some financial services have been replaced by tele-bots: if you make an internal corporate loan transaction, you might never talk to a human. You get a series of questions and directions to make various selections from a web site.
I came across a (now dated) discussion of a fast-food manager robot (Hyperactive Bob) that I thought was telling: Bill Christensen, "It Has Come to This: Computer Orders Restaurant Workers Around," Science Fiction in the News, June 19, 2006.
Fast-food managers today….
Middle-managers Unite!
“Middle-managers Unite!”
now you’re talking my language, not only the loss/stripping of skill but with it expertise and related aspects of quality.
has anyone here read Donald Schon on Educating the Reflective Practitioner?
http://www.infed.org/thinkers/et-schon.htm
Great article! A few questions:
–In what sense are the GOFAI and Watson ‘paradigms’ fundamentally different? In both cases there is extensive reliance on brute computing power as well as on “rules” in the sense of computable algorithms. The approach to the algorithms may be different, but that may be a function of the growth of AI rather than some sort of conceptual revolution. In fact, I think this is how Daniel Dennett might describe it. I’ve heard him say in lectures that we should expect AI to take more than a few decades to even begin to approach human intellectual capacities and that it develops through trial and error approaches to tackling these problems. Further, I would wager that two people as far apart as Dennett and Fodor would agree that the success of Watson provides at least some indirect support for the computational theory of mind. Given that, why call the beginnings of AI and Watson fundamentally different paradigms?
–Given that we humans make ‘howler’ mistakes, including ones concerning relevance, especially, though not exclusively, when learning new skills, why think that Watson’s ‘howler’ is the key to the difference between us and it?
–There seem to be two different types of comparisons made between humans and computers like Watson. One compares how humans and Watson play Jeopardy as a matter of course; the other compares Watson to 'humans at their best'. In the first case, you don't have to be an expert Jeopardy player to understand Watson's howling error. This seems to be explained in terms of a general human capacity to make judgments about relevance. I'm less clear about the contrast with humans at their best. It appears to have something to do with pursuits that don't have narrow criteria for success. Are we at our best when pursuing activities with non-narrow criteria of success? How is this related to our nature as beings to whom things matter?
E.L. I was paying attention to intelligent machine systems a few years back and came across Michael Wheeler's Reconstructing the Cognitive World (2005), which I think starts to lay out the differences you're asking for. I wish my brain were better and I could recall the main points. But I do recall that he clearly distinguishes what he considers his Heideggerian approach to cognitive science from the computational approaches. At the time (2005/2006), I was told that it was the thing to be reading.
Still, if you look at it, by all means follow up that reading with Bert's Barwise lecture, because it took a turn that was for me quite surprising (just showing how behind the curve I am). I was all proud of myself because I could follow Wheeler's "Heideggerian" approach (it took a long time to get there!). But then in the Barwise lecture, Bert puts forward the theory — based on Walter Freeman and Merleau-Ponty — that distinguishes his Heideggerian approach (and, I assume, Sean's collaboratively) from Wheeler's. In a nutshell he argues that to replicate human powers requires a human body. It's the only way to solve fundamental problems (like the "frame problem," I recall) that beset the classical (computational) approach as well as Wheeler's and Brooks'.
No doubt Sean and others will chime in with clearer, more up-to-date material. But just in case others are busy, and you're anxious, I'll bet it wouldn't be a waste of your time to read Wheeler and the Barwise paper (and maybe Walter Freeman's short How Brains Make Up Their Minds).
Hi D.L., I understand how what people like Brooks, Wheeler, Dreyfus, et al propose is very different from GOFAI. My question is how Watson is fundamentally different from GOFAI, which is one of the claims. They also distinguish Watson from the tradition started by Brooks.
It's hard to know exactly what Watson is doing, since as far as I know the IBM folks haven't published anything about it yet. But their informal descriptions emphasize the importance of statistical machine learning techniques. These are techniques that allow the program continuously to update the way it deals with new input, based on feedback it gets about how recent strategies have fared. This machine learning component is supposed to distinguish Watson (and lots of other contemporary AI research) from GOFAI. Watson is distinguished from Brooks's embodied approach in the more obvious sense that it doesn't have a body.
Sean
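For readers wondering what the statistical machine learning Sean describes amounts to in practice, here is a minimal sketch of one textbook technique in that family, multiplicative-weights updating: an online learner keeps a weight for each candidate strategy and shrinks a strategy's weight whenever feedback says it fared poorly. To be clear, this is only an illustration of the general idea; it is not Watson's actual architecture, and the strategy names and feedback function below are invented for the example (Python):

    # A minimal, illustrative sketch (NOT Watson's actual method): an online
    # multiplicative-weights learner that reweights candidate strategies
    # based on feedback about how each one fared on recent inputs.

    def multiplicative_weights(strategies, inputs, feedback, eta=0.5):
        """Keep a weight per strategy; shrink a strategy's weight whenever
        feedback reports it did badly on an input, then renormalize."""
        weights = {name: 1.0 for name in strategies}
        for item in inputs:
            for name in strategies:
                loss = feedback(name, item)  # 0.0 = fared well, 1.0 = fared badly
                weights[name] *= (1.0 - eta * loss)
            total = sum(weights.values())
            weights = {n: w / total for n, w in weights.items()}
        return weights

    # Hypothetical usage: three made-up answer-scoring strategies; the toy
    # feedback function always penalizes the one that ignores the category,
    # the kind of howler discussed in the post.
    strategies = ["match_keywords", "use_category", "ignore_category"]
    clues = ["clue 1", "clue 2", "clue 3"]
    feedback = lambda name, clue: 1.0 if name == "ignore_category" else 0.0
    print(multiplicative_weights(strategies, clues, feedback))
    # After a few rounds nearly all weight has shifted away from the
    # strategy that keeps producing wrong answers.

The point of the sketch is only that such a learner adjusts how it handles future input from past feedback, rather than following a fixed, hand-coded rule set, which is the contrast with GOFAI being drawn above.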
you folks may find this bit of Rorty (paralleling Heidegger, and via a Kuhnian expansion of Davidson), offering an alternative to the mathematization of the world and embracing the pivotal/revitalizing role of the ir-rational and the poetic in personal and world history, to be of interest:
http://books.google.com/books?id=UCwX_UIu9nEC&pg=PA14&lpg=PA14&dq=rorty+davidson+metaphor&source=bl&ots=FB-9CoriiH&sig=YCyQz3_gb85ydlmGbLiEP87-ioE&hl=en&ei=z0FuTfXUOIe5tgfIquCHDw&sa=X&oi=book_result&ct=result&resnum=6&ved=0CEMQ6AEwBQ#v=onepage&q&f=false
Bert’s 1967 article has been incredibly incisive and prescient, and I’m not sure all the significant inferences have been drawn from it with equal force.
Let me try to bring out the ones that have been less noted by distinguishing, for purposes of exposition only, between the receptive and the productive side of the human body. Because the body is receptive to the world in so many and intricate ways, that receptivity can never be mimicked; it could only be duplicated. Hence a computer will always be relatively clueless when it comes to being in and talking about the world, and therefore it will never be able to converse about the world like a person and pass the unrestricted Turing test.
The productive side comes to light if, counterfactually, you assume a computer will pass that test. We would eventually find conversing with such a computer disappointing because the computer would have no standpoint or a faked standpoint. (There is of course the issue of human complicity in artificial fakery.) To have a standpoint is to have a human body, to have parents, to be gendered, to have been injured, to have been healed, to have been hungry, to have been blessed, getting older and in time old.
By being productive then I mean the way one human being addresses and engages another. So when Bert talked about “to be intelligent,” what’s at issue, I believe, is not just cognitive skill but also the insightful richness that we encounter in a person.
this, and the correct answer to the question "who is the leader of Libya?" is Gaddafi, but of course that is so wrong on a human level…
AB. I don’t understand “duplicated.” Do you mean by another body?
Projecting out, it seems that lots of technological possibilities could exist to grow or raise very close body-substitutes.
Given that, I wondered for a while, inconclusively, about an "in principle" argument for something uniquely us (at our best), something unduplicable. While I am far from a conclusion, my candidates are the phenomena of "conversion" (the less positive-sounding "anxiety") and what I understand to be our historicality (which I crudely think of as our ability to traverse multiple, competing understandings of being, and to watch & wait for other "gatherings" — to turn). I suspect that this historicality and the phenomenon of conversion are linked.
Whenever I turn my thoughts to this, I think duplication is in the cards eventually. But surely we have a very very long way to go.
Relative to the beauty of receptivity, we tread treacherously close to qualia. It would not matter even if we all saw a different shade of red. Even if computers could mimic our receptivity, they still couldn't give a damn. They also have the opposite problem in that they can't share. Experiencing the richness of a human being is the interaction of receptivities. Just as there's no private language, there's no relevance to the abstraction of an individual consciousness. (I'm sure the quixotic attempt to produce one will have both grand and pernicious consequences.) The sharing/shining they can't compute is best illustrated by the aesthetic experience. BTW, there is another interesting article in the Harvard Review on aesthetic experience being an opportunity for moral growth through the transformation of our capacity to take pleasure in new ways. Maybe not moral per se, but perhaps ethically significant as learning…sharing?
Dmf, re Libya, I was shocked that I only got 3 right in this quiz:
http://www.guardian.co.uk/world/quiz/2011/mar/01/muammar-gaddafi-charlie-sheen-quiz
yes sir the flattening of all things to gossip (and vicarious/virtual experience) is insidious, wide and deep, ruling most aspects of our daily lives (it has its evolutionary role in pack/pair-bonding).
not to (just) repeat myself, but aspect-dawning/seeing-as has the aesthetic and the ethical, is receptive and productive, and doesn't just compute information/data. We should avoid the Beauty=Truth=Good aspects of Greek thought even if they 'work' in math and physics, but certainly ethical development is like learning how to have an 'ear', 'nose', or an 'eye' for (an appreciation of), pace my Buddhist compadres. I would say the seeing of a spark of interest (possibilities) in a child's eye (or the deathly lack that comes with failure to thrive) may be a 'better' example than the mere aesthetics of perception/art. The sharing (not con-fusing with) part is key, and so is getting out of the conservative focus on the family and joining the wider world.
“Pasteur and Pouchet disagree about the interpretation of facts because, so the historians say, those facts are underdetermined and cannot, contrary to the claims of empiricists, force rational minds into assent. So the first task of social historians and social constructivists, following Hume’s line of attack, was to show that we, the humans, faced with dramatically underdetermined matters of fact, have to enroll other resources to reach consensus – our theories, our prejudices, our professional or political loyalties, our bodily skills, our standardizing conventions, etc.” – Bruno Latour
char, how do we get people not just to taste more/better but to get into the mind-set/life-world of the chef?
http://www.npr.org/2011/03/03/134195812/grant-achatz-the-chef-who-lost-his-sense-of-taste
What a great story. With fulfillment at the heart of these ATS threads, it's inspirational to hear about such authenticity. To get so much out of your work/passion is a true gift. He talks about slowing down the eating process, and of course Sean discusses family meals. Ostensibly so mundane, but a great respite from the grind and dedicated time for conversation. My wife and 3 daughters can sit for hours at the dinner table. As he says, you check out of the eating and create a new dimension – for a short time everyone is on the same wavelength.
http://www.theatlantic.com/technology/archive/2011/03/googles-new-algorithm-incorporates-human-feedback-about-quality-sites/72006/
Google could build a jocular artificial mind. Bake in some uncertainty. A hip Watson.
machinic contingency, irony, and solidarity?
can a machine know what it doesn’t know, or experience the un-canny?
http://www.richardsennett.com/site/SENN/Templates/Home.aspx?pageid=1
not a football fan but maybe a lawyer?
http://www.nytimes.com/2011/03/05/science/05legal.html?_r=1&hp
dmf, let's try one extreme. Watson the conversationalist can't experience the uncanny because, as noted, it's doing something else. But how well can it feign it? Google interactively incorporating human input into their algorithm had me wondering whether they could focus their considerable talents on personal algorithms. What if they continuously monitored one person's reaction to every input (written/spoken/seen), for their whole life for that matter? (Aren't some people already wearing video cameras?) Imagine a robot slowly mimicking the output of its owner. So, to your point, there is no real irony, but historically higher rankings from elicited guffaws generate pithy responses. Depending on a life's worth of jokes, maybe the artificial raconteur's howlers might be modest. Just think of the commercial applications, dmf. We could preserve these virtual identities for an iPersons store. Cheaper than cryogenics. The vain will want to leave behind facsimiles.
so we can be alone together forever, sort of a holographic Sartrean hell?
http://itc.conversationsnetwork.org/shows/detail4803.html
while I have concerns about replacing human-beings with machines I'm more worried (tho there is some overlap here) about us thinking of people in terms of mechanics/parts/functions, as human resources to be managed.
http://www.npr.org/blogs/13.7/2010/09/29/130221453/how-to-live-forever-or-why-habits-are-a-curse
automated race to the bottom?
http://www.nytimes.com/2011/03/07/opinion/07krugman.html?src=ISMR_HP_LO_MST_FB
antebellum AI or purloined letters?
Click to access Poetics%20Today_0.pdf
Dmf, if we take the theme of the (excellent) NPR article into Whitehead's territory and posit events/occasions as metaphysical constituents, where people emerge from the world, is it fair to say that AI prospects dim even further? In the sense that Watson is a Kantian subject/mirage?
ch, yes to the dimming via Whitehead/Deleuze/emergence; to the degree that one can find a kind/suggestion of moral calculus in Kant, and/or a transcendent/structuralist aspect, then yes, but I would blame this more on Chomsky and Pinker and others.
http://www.npr.org/2011/03/09/133372394/trading-wall-street-for-life-in-a-monastery
Laudable, I guess. I don't have the spirituality gene. No more complicated than elephants lingering around their dead. Or maybe a misnomer, if Spinoza's virtue qualifies. In that vein, silence would be a living hell. Everything I do is designed to fill that void, a reflexive and calculated diversion. I think it was Woody Allen who said he doesn't mind dying, he just doesn't want to be there.
not a spiritual bone in my body, but it reminded me of talk of St. Paul's conversion experience here a while back, and how we respond, or not, to calls of conscience: experiences of, attunements to, intensity/quality/authenticity that stand in stark contrast with what They say is important. So in some significant sense more complicated than elephant mourning, just not more mysterious.
ch, check out:
http://books.google.com/books?id=IT2BRpMsiuEC&printsec=frontcover&dq=simon+critchley+wallace+stevens&source=bl&ots=hDFMv9zVWc&sig=ouWSS695NEXTpNeSDqPD1_NwNws&hl=en&ei=Z8R4TePsEIKosQPemdT5Ag&sa=X&oi=book_result&ct=result&resnum=5&ved=0CDUQ6AEwBA#v=onepage&q&f=false
Thanks for all the great links. Increasingly, I tread lightly with respect to spirituality. The conundrum, almost as always, is crystallized with Nietzsche – in this case with the call to conscience that survives the polemic against modernity/religion. It may be apologetics, but how different, really, is his call for authenticity? Certainly elements of his life had a monastic character. If imitation is the greatest form of flattery you need look no further than Zarathustra. Considering his aversion to polarity, is a diatribe the opposite of admiration? That the bells still toll on Sunday is shocking, but a foil (prerequisite?) that serves (advances?) his case. If only he had read Kierkegaard!
my pleasure, thanks for the conversation. I'm opposed to the idea/possibilities/implications/authority of a spirit-realm/dimension (William James was wrong to assert that it doesn't matter if it is God speaking to you or your sub-conscious) but come at this indirectly (my own SK tribute) by offering perspicuous alternatives. As I said somewhere, Nietzsche should have stuck (and maybe did?) to being (or should be read as) a depth-psychologist (see Graham Parkes). The part of Critchley/Stevens that interests me here is: can we make peace with the fact that we are filling in our intuitions with as-if-fictions, and, to follow my post-Wittgenstein line here, that this is the means/way for our ethics/being-with?
on making a sacrificial commitment to one’s calling, resisting dissolution:
http://jackkerouacispunjabi.blogspot.com/2011/03/i-was-waiting-until-midnight.html
the eminent Prof. Moyal-Sharrock gives a reading of our non-conceptual/emotional connection to characters in novels/perspicuous-presentations, and, pace yours truly, follows Aristotle in finding these experiences to be instructive about our being in the world with others and not merely cathartic:
Click to access 903547.pdf
what is Thinking?
http://www.nytimes.com/2011/03/13/books/review/book-review-the-social-animal-by-david-brooks.html?_r=1&adxnnl=1&adxnnlx=1299938450-JYTGJCcXW3Vz6SFky96Bqw
replicants, LSD, and the RedBook: http://www.wpr.org/book/100321a.cfm
Speaking of LSD, there's an interesting question of whether the influence of drugs lends more or less credence to Brooks' thesis. I think it highlights the obvious: that these neat distinctions (reason/emotion or sub/conscious) are fallacies. Reason can no more be crucified (SK) than amount to mere mathematics. If we are to regard Nietzsche as the premier psychologist, his physiological determinism is but one influence. BTW, Nagel's most recent book sounds great… I need to add that to the list.
yes, the old either/or(s) aren't borne out by the neurophenomenology (except in cases of brain failures/damage); yes, reason shouldn't be reduced to mechanical calculations or any other death drive that would seek to eliminate novelty/surprise/excitations; don't forget the Nietzsche of marching metaphors and dancing Gods (who he suggests laughed themselves to death when one of them declared himself Supreme); think Freudian sublimation, not Id-unbound. haven't read Nagel since he was going batty but I'll give it a look.
costs of mechanized desires:
http://www.wired.com/magazine/2011/02/ff_joelinchina/all/1
mind mapping?
http://www.theatlantic.com/technology/archive/2011/03/from-filing-cabinets-to-digital-thought/72490/
simon critchley on faith as calling and commitment:
http://philosophy.uchicago.edu/podcasts/elucidations.html#20
Fantastic link dmf. Thanks. I had no idea these were available through iTunes. (To date I had only found Philosophy Bites.)
could cyborg chimps type Shakespeare?
http://thedianerehmshow.org/shows/2011-03-16/miguel-nicolelis-beyond-boundaries
from an interview with Dermot Moran:
Thinking of the human being as an information-processing machine…
Absolutely. I actually was just talking about this yesterday in Paris. We were talking about Dreyfus, who has this idea of absorbed coping, this sort of expert basketball player who does not have to think when playing a game. I think there is something right about that, but the contrast that Dreyfus draws is with a sort of Cartesian/Husserlian picture of the mind. My argument was that Dreyfus probably has Husserl wrong here; I think Husserl is probably more on Dreyfus' side than Dreyfus realizes. The picture that Dreyfus has of that kind of Cartesian intellectualist model of the mind is, I don't know if you have ever seen the movie RoboCop, the main character was half man, half machine and had all these calculations showing on his visor. You are supposed to see that on jet fighters as well, where the information comes up on the screen in front of them; it's almost as if we were calculating in some kind of mathematical way all potential for action. I think that's the model that cognitive science definitely has, in that they are trying to track all the routines and have a line of code for everything that a human being does. Dreyfus is right to attack that model, but I think that in between that and the sort of absorbed coping there is an intermediate model of the embodied person who is conscious of his/her body but not extremely self-conscious unless something goes wrong. I am sitting on a chair now and I am really thinking more about talking to you than being conscious of sitting on a chair, but I can bring my attention to bear on where my feet are, and I can move my feet and adjust my position to make myself more comfortable – and I do all of that while I am doing everything else, because I am embodied all the time. I think that level of embodied consciousness has to be brought to bear, and frankly I do not think Dreyfus quite gets it. There is a kind of aware body that is neither the mindless robot that is in the zone nor the calculating robot that is doing everything like RoboCop. There has to be something in between, which is the human person.
I am also working on this: phenomenology was always characterized as a philosophy of consciousness, especially in Husserl. But in Ideas II, he talks about the personalistic attitude, that is, living first and foremost in a personal world with others; and "persons," of course, means that we respect each other as sources of meaning and value that are in some respect irreplaceable. That personalistic philosophy you find also in Wittgenstein and Scheler, and it is something I want to bring back to the phenomenological debate.
an excellent conversation on the role of desire in the logos-de-animation:
http://www.personal.psu.edu/cpl2/blogs/digitaldialogue/2011/03/digital-dialogue-45-soul-and-substance.html
http://blog.uvm.edu/aivakhiv/2011/03/21/artmonks-children-of-thoreau-whitehead/
elizabeth gilbert and her muse:
http://www.radiolab.org/2011/mar/08/me-myself-and-muse/
computed affect monitoring?
http://www.sciencefriday.com/program/archives/201103254
hey charlie, I've fallen into just repeating myself here so I'm moving on. thanks for the engaging exchange, and don't forget to check out:
http://www.psupress.org/books/titles/0-271-01677-9.html
http://www2.lse.ac.uk/publicEvents/events/2011/20110219t1100vSZT.aspx