Right now, in the wake of the Facebook IPO and the downward drift in its stock price since, you can hear the spluttering everywhere:

“This is bullshit!”

Um, well, yeah. What did you expect? A 7-year-old company going IPO at over 100 times revenue (not earnings) per share, with controlling interest still left in the hands of a 28-year-old? Own up, people: if you were desperate to lay your hands on this stock as a “can’t lose” proposition, you were just shitting yourself. And of course you’re shitting yourself now, albeit in another way.

The fecomantic question is: what can one foretell from this shit? (Unfortunately, this divination means leaning over and looking down at it. Ew. Hand me that clothespin.)

First, some historical and personal background. This whole episode is upside-down when I compare it to the understanding I gleaned while working in Silicon Valley in the 1980s and early 90s. Then, the system (which I hope still mostly works) ran like this:

(1) Venture capital is pieced together by the VC firms from little slivers that are mostly shaved off of highly regulated institutional investment funds — the bigger ones being state government employee pension funds. These funds are forbidden by law from risking very much. But “little” is relative. These institutional funds have walloping amounts of money under management. What’s a little for them goes a long way for the rest of us.

(2) Your founding team would get options on maybe 10% of the company; maybe another 5% goes to others who follow. The rest is reserved for the open market upon IPO. As the VCs liked to put it, “Do you want 1% of a very successful company, or a controlling interest in nothing?”

(3) If you IPO (most startups don’t), you start at maybe $8-$12/share. In your wildest dreams it goes up to something like FB’s initial $38, over a period of years, after proving that there’s something to the promises and that there’s a clear path to profit after only a few years.

(4) During those years, you wear “golden handcuffs” — that is, your stock options don’t fully vest for a while. To leave the company earlier is to abandon any possible winnings from as-yet-unvested options.

(5) These are options, not actual stock; they are worth only the value of the stock (at some point in time after vesting) minus your exercise price.

(6) Your technology is something truly new and clever that no competitor can properly replicate for a while, except perhaps by licensing your patents or by inventing improvements that work around your patents. Very likely, much or most of it spent its embryonic phase in a government-funded research lab.

(7) You went to Harvard? Or to some other Ivy League school? Who cares? This game is all about what you can do with technology, and your Valley connections, not with old-school East Coast elite collusion. If you went to San Diego State, or some cow college in the Punjab, you’ve still got a shot.
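The option arithmetic in (5) is simple enough to make concrete with a toy example (all prices here are hypothetical, chosen only to echo FB's $38 open):

```python
def option_value(stock_price: float, exercise_price: float) -> float:
    """Value of a vested option: stock price minus exercise (strike)
    price, floored at zero -- underwater options are worth nothing."""
    return max(stock_price - exercise_price, 0.0)

# An early hire with a $2 strike does fine at a $38 IPO price...
print(option_value(38.00, 2.00))   # 36.0 per share
# ...while a late hire with a $35 strike is underwater after a slide to $28.
print(option_value(28.00, 35.00))  # 0.0 per share
```

That asymmetry is what makes the golden handcuffs in (4) bite: leave before vesting and you forfeit the upside; stay, and you can still walk away with nothing if the stock sags below your strike.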

The Facebook story? At best, only #4 and #5 apply.

To be clear: I’m not mistily nostalgic for The Old Way. It’s ugly, intensely stressful and demeaning in countless ways. I had to get out of it in part because of these aspects. But I have to admit it worked.

Facebook might faceplant, but in the meantime it stands as an unfortunate model to young entrepreneurs, as did so many of the dot-com payouts.

And, as usual, the salutary role of government is obscured. Essentially all of the technology upon which FB is built was developed by a combination of government-funded research and the venture process I outline above. And it’s not just R&D, corporate and otherwise. Follow the money. If it weren’t for a very stable form of employment — working for the government — and regulation of the pension funds for such employees,  the institution of venture capital would probably be vestigial. Forbidding institutional investors from taking a lot of crazy risks with the bulk of Other People’s Money is, over the long term, the only way to ensure the freedom to take any crazy risks with any part thereof. It’s also pretty hard to pile up enough risk capital in any other way.

The only thing Wall Street brings to the table, in terms of offering eventual value to us hapless end-users, is that it can establish a given company’s offerings as the de facto standard for the industry. These standards help economize on further development, and help level the playing field for new entrants. (Somewhat, anyway — but are we really better off that MS-DOS, then Windows, became de facto standards?)

And my fecomantic prognostication? Well, the not-very-tech-savvy used to tease any would-be tech entrepreneur with “So you want to be the next Bill Gates?” This caused those among them who passionately hated Microsoft and all its works to bridle and sneer. Now, for a little while at least, it’s going to be this: “So you want to be the next  Mark Zuckerberg?”

And too many young sociopaths, only too happy to trade on people’s class insecurities, will say, “Yes.” In all seriousness.

And they’ll violate privacy, copyright,  patents, confidentiality agreements, non-compete agreements and their friends’ trust, if that’s what it takes. Yes, the movie was just a movie. No, Mark Zuckerberg is not a sociopath. But there’s much in the real story that was bad enough.

Superfuckyounomics – how to cash in if your first book deal didn’t do it for you

I owe the term fuckyounomics to d-squared at Crooked Timber, who subtitled his imaginary pop-econ best-seller, “How Nobody In The World Knows Jack Shit Except Economists.” At that point, the allusion to Levitt and Dubner’s original best-seller should have been clear enough to anybody: even in 2008, Freakonomics was still all over the charts.

In 2009, Superfreakonomics appeared. The Olympic-class shark-jumping implicit in its subtitle (“Global cooling, patriotic prostitutes & why suicide bombers should buy life insurance”) pretty much ratified d-squared’s sneer, as far as I was concerned. Maybe I would be better off not reading either book? Maybe I should instead spend my econ-specific reading time on more serious figures in the field, on less mercenary scholars who might actually be trying to tell us something useful?

I took a pass.

But then a friend pressed the latter title on me recently. Well, OK. Could be fun, right? Probably harmless anyway. I vaguely recollected some controversy over the book’s treatment of the global warming debate, and mentioned this to my friend. He earnestly assured me that the chapter addressing AGW was a must-read. Hmm.

I’d been looking forward to a little light reading, and had planned to read Superfreakonomics from beginning to end. But I made the mistake of turning to the Global Cooling chapter first.

Oh boy.

Right out of the gate, Levitt and Dubner treat an ephemeral nine-day-wonder piece of bad science reporting in Newsweek in the mid-70s as if it represented anything like the prevailing opinion of climatologists at the time. In 2006, fully three years before Superfreakonomics appeared, Newsweek issued a retraction of this story (31 years late; better late than never, I suppose). By way of apologia, a Newsweek editor wrote:

“How did NEWSWEEK—or for that matter, Time magazine, which also ran a story on the subject in the mid-1970s—get things so wrong? In fact, the story wasn’t “wrong” in the journalistic sense of “inaccurate.” Some scientists indeed thought the Earth might be cooling in the 1970s, and some laymen—even one as sophisticated and well-educated as Isaac Asimov—saw potentially dire implications for climate and food production.”

No soap, Newsweek.

Note the “all the kids were doing it” excuse: their main competitor had stooped to similar bad science reporting. (The same Time magazine occasionally runs cover stories with titles like “Is God Dead?“, “Was Marx Right?“, and even “Is the Unabomber Right?” These days, folks, that’s called linkbait.)

“Even” Isaac Asimov got it wrong? What? And he got it wrong in the direction of what would have been the more exciting science fiction novel premise? And, uh, Asimov made his name in which genre, originally? (I still remember my great disappointment upon learning that Asimov changed his major in college because he couldn’t hack calculus. Which is really not that hard. He wasn’t exactly burnishing climatology credentials later in his career, as he cranked out his Guides — to the Bible, to Shakespeare, to … oh, I’m getting sad again.)

Conveniently left unmentioned by the not-quite-repentant Newsweek: the scientists who “thought the Earth might be cooling in the 1970s” had been talking in terms of tens of thousands of years. The idea that such a cooling trend would be any kind of threat to humanity in the foreseeable future was definitely not what they’d proposed. No, that was the breathless fabrication of some ambitious Newsweek hacks. Climate scientists of the time who studied near-term global climate change were at that point already pretty concerned about the implications of massive industrial-society GHG emissions. When they saw these magazine covers, they probably just frowned, growled “what is this popular-science crap” and went back to cranking out more FORTRAN code on punched cards.

OK, I grit my teeth. I do not give up so easily. Surely, the chapter would get better.

Oh boy.

Next up: Levitt and Dubner cite James Lovelock as if he were somehow today’s last word in climatology. Well, Peter Stott of the Hadley Centre has said that Lovelock was “too alarmist” in The Revenge of Gaia. (Most climatologists are also a little leery of the hypothesis for which Lovelock is most famous.) Lovelock was always a little out there, and recently, he admitted it. I start to wonder: are Levitt and Dubner ever going to quote an actual working climatologist’s statement as made in a peer-reviewed journal?

I press on. This is no fun. But winners never quit and quitters never–

Blam: “The world’s ruminants are responsible for about 50 percent more greenhouse gas than the entire transportation sector.”

I hang my head: I’m reading Factoids Uber Alles. This particular canard is an overstatement of a misstatement in a 2006 FAO report (“Livestock’s Long Shadow”) whose main author had long since backed down on the claim. Even the later defenders of the report, in trying to debunk the debunking, can only get to about half of the FAO’s original figure, and even then only for the U.S. And U.S. livestock GHG emissions aren’t representative: the full-lifecycle carbon intensity of the livestock sector in the U.S. is higher than in most of the rest of the developed world, which in turn is higher than in the developing world.

OK, now, I’m not here to debunk a lot of the points Levitt and Dubner try to make, or even to show where these authors appear to be, at best, disingenuous. Plenty of people already did that, years ago. Besides which, this blog post would be longer than Superfreakonomics‘ whole Global Cooling chapter if I tried to take on every ridiculous thing in it.

No, I have a different mission here. A fecomantic one: how did this shit happen?

Let’s try some hypotheses:

(1) Incompetence: Levitt and Dubner aren’t scholarly enough.

Bzzt. I don’t even have to inhale to get this statement to flunk the sniff test. Levitt’s a recipient of the coveted John Bates Clark Medal. This award has been described as harder to get than an econ Nobel, since it goes to only one recipient, only every other year. (Indeed, many who win it go on to win a Nobel.) He knows how to do homework. He’s been doing homework all his life, and he’s been doing homework for a living.

Dubner is a columnist for the New York Times; those people know from fact-checking.

(2) Ignorance: Levitt and Dubner are so out of touch with the debates swirling around AGW that they just took some clippings handed to them by an unpaid intern, spiced them up, and incorporated them into the book.

Bzzt. I’m sorry, but this goes under Incompetence. Any statement about AGW is going to be controversial (even the statement that it’s controversial.) How could they not know this? Thus, every statement on AGW cries out for fact-checking — which these guys know how to do. Any decent editor would make sure they were on the stick. But speaking of editors ….

(3) Coercion: when their editor saw the first draft of a carefully fact-checked chapter, it was nixed, with a red pencil notation saying, “Not enough red meat for the core audience of this book, which — as I must have told you 50 times now — is glibertarian, iconoclastic young men who delight in soundbite-length contrarian opinions.” When Dubner and Levitt took a close look at their contract, it revealed that the editor did, in fact, have the right to make such demands.

Plausible? I give this hypothesis about a 50% chance.

(4) Cooperation: notwithstanding whatever their personal opinions might be on AGW, they know that any publicity is good publicity, and that a string of tissue-thin factoids leading off this chapter, together with climate-skeptic sympathies expressed throughout, would feed word-of-mouth for book sales the way gasoline feeds a bonfire.

To put it another way: Superfuckyounomics. I could swallow this hypothesis, too. Even though it’s disappointing as hell.

(5) Belief: they are both full-on climate skeptics, and they didn’t go pawing through the technical details of the debates, looking for solidity, because they didn’t want to get all, like, confused and shit.

I’m sorry, but I need to file this one under Ignorance and Incompetence. And you know how I stand on those hypotheses.

I admit there are some grounds for doubt about Cooperation and Coercion. For example, as far as I can make out, Levitt and Dubner say nothing whatsoever about ocean acidification in this chapter. They base their Nathan Myhrvold fanboi argument for geoengineering solely on the warming threat. And Nathan’s so smart, right? Hook, line and sinker.

But how do they not know? Industrial society CO2 emissions are not just a big experiment with the only atmosphere we’ve got — they are also an experiment with the only oceans we’ve got. Even by 2009, the results were coming in and they didn’t look good. Dubner, in particular, could hardly have missed the issue of ocean acidification. He was a writer for the paper of record as it reported on it. More likely, failing to mention it in the published chapter was a matter of either striking such mentions as inconvenient in marketing terms (Coercion) or knowing what not to say in the first place (Cooperation.)

Now, this blog is called fecomancy because I try to predict the future based on how people are shitting us. In the ever-shifting climate of opinion, I’m trying to carve out a little climatology niche. (Selling brown-stained, stinky, used umbrellas on eBay didn’t work out.) It’s pretty clear to me that Dubner and Levitt were shitting us, either out of pure mercenary intent or because they didn’t read the fine print. What does this mean for the future?

My fear is: more of the same. Here’s why:

Your average climate-skeptic layman is about as intelligent and educated as your average AGW believer — however little your average AGW believer would like to think so. That is, the more you believe that you’re too ignorant or not intelligent enough to have a decent opinion about AGW, the more likely it is you’ll just shrug and say you don’t know. But wait: the intelligent and educated all have access to the same facts. Why are they drawing such wildly different conclusions?

There’s a clue, I think, in the Dunning-Kruger Effect, which says that the incompetent overestimate their abilities (while the competent slightly underestimate theirs). Climatology is, in actuality, a domain so complex and specialized that the overwhelming majority of those who consider its conclusions must do so on the basis of their beliefs about how science works — and about how it fails. Now, it would be foolish to suggest that science is never influenced by politics, especially where science touches on policy questions. The question is whether that’s happening here.

I believe that the overwhelming majority of those who have an opinion on AGW (one way or the other) are simply overestimating their ability to judge how well the science is being done. What’s striking, however, is how the skeptics cling to arguments long since demolished, and even to arguments nobody ever made. Even the really smart skeptics fall afoul of the Dunning-Kruger Effect. For example, Freeman Dyson has lamented that most climate scientists don’t realize that CO2 forcing effects are less-than-linear. (In fact, all climate modelers know this.) Even MIT’s semi-skeptic climatologist, Richard Lindzen, recently said that if we wait a while longer to see what’s really happening, we can always make the necessary changes to offset anything bad. (In fact, thermal coupling with the oceans means that warming is substantially delayed — once you slam on the brakes, you’re still fated to skid through the next six red lights.) And if Lindzen and Dyson have such feet of clay when they tread into this area ….

I hereby predict: we’re never going to run out of high-profile big-name bullshit on this issue.

But I knew that. I guess what really disappoints me is that I was hoping to have fun reading Superfreakonomics. Instead, I’m going to be compulsively checking every purported fact.


The Fed Needs a Fecomantic Moment

Let’s face it: Ben Bernanke is never going to get an FDR Moment. First: Bernanke isn’t President of the United States. Second: FDR wasn’t announcing some fancy-schmancy macroeconomic goal like a Nominal GDP target. No. He was announcing class warfare and humongous jobs programs.

Can such a message work now?  No. As bad as things are in the U.S. economy, it’s not like Spain (with its Depression-level unemployment rate but European social safety net), much less like America in 1933. Bernanke can’t rattle political sabers with crowd-pleasing rhetoric about the rich, he can’t roar “I welcome their hatred!” to a crowd that roars back in approval. Nor can he get major new spending programs. Not even Obama can do that.

Announcing a higher nominal GDP target isn’t going to work unless people with lots of cash and relatively bright prospects wake up and say, “Uh-oh, Ben really means ‘give me higher inflation or else’. We’d better get cracking and find some slightly better-yielding positions than cash, before inflation starts to chew holes in our piggy banks.” But who would these hypothetical people be? Would they be corporate treasurers? The operators of major funds? The 0.1%? No. Their first question is: “Or else what, Ben?”

Here’s what Bernanke must do: he has to hit Main Street, not Wall Street, and he has to take advantage of the economic ignorance of the American consumer. He has to go out there and say, “I’m sorry, but I just can’t keep the raging fires of inflation under control anymore, not after printing all this money. My best estimate now: it’ll rise to 4%. Only there can I hold the line.”

Now, the fact is, there are no raging fires of inflation. Employment is far below the level required to ignite a wage-price spiral. And printing money (“quantitative easing”) in a liquidity trap gives you very little inflation in a slump. (Exhibit A: Japan. Colossal QE, but still in deflation.)

But there is a golden opportunity here to encourage ignorant Americans in the fallacious metaphor of Inflation as A Process of Combustion Kindled by Printing Press Output. Gasoline prices have gone up, and food’s not cheap either. That’s what they think inflation is. Of course, food and energy prices are volatile, which is why they are stripped out of the “core” Consumer Price Index. But your average American consumer doesn’t think that way. No, your average American believes Greenspan spent his days keeping The Great Inflation Firestorm at bay. If there was any inflation after that, it was because Alan, for all his (supposed) brilliance, was still just this old guy who had to sleep a few hours a night, maybe even take weekends off now and then. Your average American thinks Bernanke is just a wimpier Greenspan. Fecomantically: that’s what you want them to think.

There are Americans who don’t fear for their jobs very much. What’s needed now is for them to go into their next performance review and hint darkly to their bosses that if they don’t get a 4% cost-of-living increase soon, they’ll be eyeing greener pastures. There are Americans who have a lot of cash piled up, who are fearful of the volatility of the stock market. You want them to look askance at the measly earnings on their safe-as-houses-used-to-be portfolio, and hint darkly to their investment advisors that they’ll be shopping for better help if they don’t see a 7% nominal return by year-end.

What’s lacking are Americans who could see through a Fecomantic Ben and his natterings about a 4% conflagration, who can see that he’d only be trying to articulate an otherwise-toothless prophecy convincingly enough to make it self-fulfill. And this is a good thing to lack.

But what if they wise up fast? I’m not worried. Sure, readers of the Wall Street Journal might get clued into Ben’s game. Maybe readers of the New York Times as well. But those readers are like Lake Wobegon’s children: all above average. And they’ll overestimate the ability of others to see through the game; indeed, it’s the way of the world.

Most Americans are instead watching MSNBC or Fox. And when Bernanke announces 4% inflation coming soon (perhaps throwing a damp towel into a laundry hamper next to the podium, for emphasis), the talking heads on those channels will go ballistic, they will howl for Bernanke’s scalp, they will point fingers left and right. Their audiences will believe those talking heads. Those in the audience with some bargaining power at work will go out and demand raises. Those with some purchasing power in their bank accounts will go out and start to spend their cash hoards. That would create more of what economists call Aggregate Demand. And that’s what’s missing now.

There’s no Volcker Moment for Bernanke. There’s no FDR Moment. There sure as hell ain’t gonna be a Bernanke Moment. Only a Fecomantic Moment will do.

Somalia – not an anarcho-capitalist paradise, but ….

Read Somalia After State Collapse: Chaos or Improvement? (PDF). Yes, right now. OK, whenever. But read it.  It’s eye-opening.

The authors make a good case that — one of our favorite YouTube videos notwithstanding – if you could somehow infect the rest of Africa with a Somali-style social order, Africans would benefit by a number of measures. (But sadly, not by all. And is having cellphone service that delivers you-can-hear-a-pin-drop-on-the-other-end worth a higher infant mortality rate?) Yes, I know: doing better with no national government than most Africans do with one — that’s not exactly a high bar to clear. But this doesn’t mean Somalia has no lessons for us all.

Particularly fascinating is Somalia’s functioning national paper money system sans national government – yes, with no treasury, no central bank, no mint. How is that possible?

It turns out, after the collapse of the national government, nobody printed new money for a while, so the value of notes was stable. People kept using them. Why not? They worked. Habits die hard.

Then the inevitable counterfeiters weighed in, bringing the inevitable inflation. But Somalis refused notes with higher denominations than they’d seen in government times – those had to be not only counterfeit but hugely profitable for the counterfeiters as well. Bad enough that those guys are inflating our currency, right? Why make them richer? This capped counterfeiter profit.

Between the inflationary pressure of counterfeiting and that high-denomination cap, the Somali shilling inflated to the point where the value of the notes barely exceeded costs – payroll, production inputs, equipment maintenance, transport and insurance – for the counterfeiters. Yes, believe it: as a “counterfeiter” in today’s Somalia, you’re actually providing a useful product (a credible-looking token of stored value for exchange) at a fair profit. You’re re-supplying the market with new notes as the old ones get too torn from being quarreled over in the marketplace. Or bullet-riddled and blood-drenched after you scramble out of the way of the “militia” you were selling drugs to, just before they opened fire on a rival gang. Or dropped into the stew pot accidentally, when they fall out of your shirt pocket as you lean over to inhale the aroma. Or soft enough from repeated use that their toilet-paper value finally exceeds any other possible value, at least in certain awkward moments when one is caught unprepared.

It gets better: unlike with gold, demand and supply of Somali shilling notes will tend to balance as the economy grows. Little or no inflation. Also little or no deflation (which can also be bad) since the “counterfeiters” will see deflation as an opportunity for more profit (in real terms), will crank up the printing presses to try to steal a march on their rivals, and — will reflate the currency in the process. Years from now, public health experts and WHO epidemiologists will thank the Invisible Hand  for reducing transmission of certain diseases, by invisibly but continuously handing Somalis their emergency toilet paper.
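The equilibrium described above can be caricatured in a few lines. This is only a toy model, with entirely made-up numbers; the point is the mechanism: notes get printed whenever their purchasing power exceeds the cost of printing them, which pins their value near that cost.

```python
COST_PER_NOTE = 0.04   # hypothetical all-in printing cost per note (USD)
DEMAND = 1_000_000.0   # hypothetical real demand for note balances (USD)

def note_value(supply: float) -> float:
    """Purchasing power per note: fixed real demand spread over the supply."""
    return DEMAND / supply

supply = 10_000_000.0  # start with notes worth a dime each
for _ in range(50):
    if note_value(supply) > COST_PER_NOTE:
        supply *= 1.10  # "counterfeiters" print while it's profitable
print(round(note_value(supply), 3))  # 0.039 -- just under the 4-cent cost
```

Deflation, in this little model, is just the note value drifting back above cost, and the response is the same: the presses restart, and the currency reflates.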

The resistance to higher-denomination notes was not total, however: Somalis will use U.S. dollars for higher-value transactions, since Somali shillings for any big-ticket item form quite a stack, and USD has been reasonably solid over the last two decades. But let’s not ask what Somalis do to earn those dollars, shall we? (If “earning” is quite the word I’m looking for.)

The paper’s discussion of clan-based legal and insurance systems is also very interesting. And it’s full of fun facts to know and tell. For example, did you know that committing murder in Somalia, a heavy rap anywhere, will set you back 100 camels if you killed a man, but only 50 if you killed a woman? That if one of their freelance judges (see Xeer) renders a decision commonly viewed as bad or corrupt, his market rankings go into the toilet and he’ll soon be out of work if he doesn’t clean up his act? As for the clan-based insurance guilds – whoa, don’t tell Wall Street about this, they’ll go in and spoil it all, with fancy rocket-science financial instruments with fancy names like Black Hawk Down and Put This in Your Khat Pipe and Smoke It.

I’d like to verify all this stuff, since the source is decidedly Libertarian-partisan. However, if its Google Scholar citation index can be relied on, it has been taken pretty seriously in some quarters. If it’s all just crap, well, they sure fooled Fecomancy.

Don’t trust anyone under 30 to invent anything

“The radio, the telephone, Facebook—each of these inventions changed the world. Each of them scared the heck out of an older generation. And each of them was invented by people who were in their 20s.”  Daniel H. Wilson, author of Robopocalypse, in the Wall Street Journal, June 11th 2011.

Now wait a minute.

Radio: was it invented by David E. Hughes, by Oliver Lodge, by Heinrich Hertz, by Édouard Branly, or by Guglielmo Marconi? We could argue about that, but not about the following: Hughes was in his late 40s when he played around with radio wave transmission and detection; Lodge was in his early 40s when he demonstrated radio’s potential for communication; Branly was about 40 when he started work on the coherer. Hertz was admittedly not quite 30 when he verified electromagnetic theory but, oddly, he saw no other use for his experimental apparatus. Well, to be an inventor, you have to be trying to invent something, right? It was Marconi, in his early 20s, who made longer-distance radio telegraphy practical, but he was standing on the shoulders of giants (including Augusto Righi, under whom Marconi learned the relevant physics behind Hertz’s work) to reach high enough to get all the theoretical and technical components in place. Yes, that’s right: Marconi invented nothing in making radio work; he just doggedly assembled stuff and tried it out.

The telephone: was it invented by Innocenzo Manzetti, by Antonio Meucci, by Johann Philipp Reis, by Elisha Gray, or by Alexander Graham Bell? We could argue about that, but not about the following: Manzetti was in his late 30s before reporting any success (if you can call it that — vowels only?); Meucci was in his late 40s when he got some sort of intercom system working in his house; Elisha Gray was 39 when he got into that still-disputed patent race with Bell, and Bell used Gray’s liquid-receiver techniques to demonstrate clear-speech telephony in 1876, when he had a few months to go before turning 30.  Reis? Well, OK, in his mid-20s. But he got nowhere.

So who invented social networking websites? Obviously it wasn’t anybody at Facebook, which has left several predecessors in the dust. A likely candidate is Andrew Weinreich, who founded SixDegrees.com in the mid-90s (prematurely, by his own admission) and who  is named first (of 13 co-inventors) in a patent assigned to that company. From what little I can put together about Andrew’s life, he was perhaps 26 or 27 when he started SixDegrees.com. There you have it: a blazing young genius.

The first question you should really ask, however: is web-based social networking actually much of an invention? Is there much about it that wasn’t blindingly obvious, simply waiting for enough iterations of Moore’s Law to become practical? (If SixDegrees.com had an Achilles Heel, it was that the digital camera was not yet ubiquitous.) Every patent is supposed to justify an inventive step in terms of what other patents do not cover. The prior art discussion in the SixDegrees patent filing is desperately brief: it only gestures vaguely at a free e-mail service of the time, noting its shortcomings. Apparently what is claimed is a database of people who negotiate direct links with each other bilaterally. Perhaps the obviousness of that idea is 20-20 hindsight. Clever, yes. Inventive genius? Hm.

Notably, Weinreich had no particular tech chops — he’d only had a few years of experience in financial analysis and in law.  Marconi might not have truly invented radio, but he was elbows-deep in technology. Six Degrees foundered in part because of the walloping amounts of money it paid to other companies to develop its website.

It was because Fannie and Freddie held SUPER-subprime mortgages….

Yes, that will be the next Wall Street Journal / American Enterprise Institute / CATO / Fox News line on how the socialist U.S. government caused the financial crisis. At least, that’s what I gather by extrapolation from  Rortybomb’s analysis …

I’m increasingly embarrassed that I ever called myself a Libertarian. C’mon, people, deal: bubbles happen, and that’s a market failure. Your best economist, Vernon Smith, has made a career of demonstrating this, and even proposes a delightfully simple Regulation Lite market solution.

But before we get into that, glibertarians, think: was there a Federal National Bulbous Perennial Commission (“Fannie Boopsy”?) at the root of the Dutch Tulip Mania? I mean, you guys aren’t crazy enough to argue that was caused by government meddling in markets, are you?  Oh, wait: you are.

Dedicated to the proposition ….

… that there’s often more predictive power in analyzing lies and self-deception than, well, almost anything else that’s barely worth your time. (Somehow, I think Cassandra was way ahead of me.)