Sunday, December 30, 2012

The Rocktagon Recap of UFC 155: dos Santos vs. Velasquez 2!

Featuring Joe Lauzon painting the canvas with his own platelets, a Cypriot Badass derailing one of the American variety and, oh yeah, a NEW World Heavyweight Champion!



Well, this is it, folks: the last MMA show of 2012, unless you want to count those freak show pro wrestling cards they hold on New Year’s Eve in Japan, anyway.

UFC 155 is sort of an auspicious way to end/begin the year, as it features one main event that actually matters and a whole bunch of fights that, on paper, don’t seem to mean much of anything to anybody. I guess you could call the show the definition of a one-fight event, but seeing as how that one fight is probably the best heavyweight match-up available for at least another year or so, you’ve got to figure it’s a fight that counts double, maybe even triple, as a headliner.

Enough yammering from me; who's ready for some heavyweight tomfoolery? Me too, kids, so let's hear it for UFC 155: dos Santos vs. Velasquez 2!

The show itself is emanating from Las Vegas, Nev., while I’m calling it LIVE from the finest pub in all of K’Saw, Ga. Man, I sure do love smelling like ketchup and Marlboro Reds for three days afterward!

Our hosts are Joe Rogan and...Jon Anik? Hey, where's Goldie at?

MIDDLEWEIGHT BOUT
Chris Leben vs. Derek Brunson

Leben comes out to the Chili Peppers' cover of "Love Rollercoaster," and the first person to tell me the 1996 movie that featured the song wins...something. Leben hasn't fought in about 13 months, and Brunson is coming fresh off getting murder-death-killed by Ronaldo Souza. Needless to say, this was a very uneventful fight, with Brunson scoring one low takedown after another while Leben tossed his chunky-ass overhand rights at approximately no miles per hour for fifteen minutes. Both dudes were gassed after the first round, and Leben looked like he needed an oxygen tank when the final bell clanged. A unanimous decision victory for the Strikeforce transplant, although it's not exactly what I would call an impressive performance in any regard.

Lots of "stars" in the house tonight, including MC Hammer, Mike Tyson and that redheaded chick from "That '70s Show," who is now a dark-haired goth chick that wears a lot of pink lip gloss.

MIDDLEWEIGHT BOUT
Alan Belcher vs. Yushin Okami 

Belcher out to "Little Wing" by Hendrix. Kind of a wasted opportunity here, since he has the Johnny Cash tattoo on his arm and everything. "Dirty Old Egg Sucking Dog" would make for such a bad-ass walkout tune, you know. 

As with the previous match-up, this one was pretty darn boring, for the most part. Okami kept shooting for takedowns, but he kept ending up on the bottom, with Belcher falling on top of him. Yushin pretty much dominated as far as striking and octagon control went, but there were a few moments where it looked like Belcher might bust loose (spoiler: he never did). A really lackluster bout, with Okami taking a unanimous decision victory.

MIDDLEWEIGHT BOUT
Costa Philippou vs. Tim Boetsch


Costa out to Twisted Sister, Boetsch out to "American Badass." Probably the best fight of the show so far, but that's not really saying a whole lot. 

It seems as if Boetsch broke his hand heading into the second, and his Cypriot adversary took full advantage once things picked back up. Boetsch gets busted open on a headbutt that was never called, and a finger to the eye makes the American middleweight look even groggier. Late in the third, Boetsch shoots for a desperation takedown, and Philippou simply pounds him out once he's flat on the canvas. Not so much an impressive victory for Philippou as it is a testament to just how shitty Boetsch's conditioning is.


LIGHTWEIGHT BOUT
Jim Miller vs. Joe Lauzon

Our fight of the night, and easily the bloodiest fight since Velasquez made Antonio Silva's head gush out fifteen gallons of blood back in May. Miller just UNLOADS on Lauzon for the first four minutes, and Joe's head turns into a piece of meatloaf with extra ketchup. The second round begins with Miller shooting for a takedown, and Lauzon almost securing a triangle. The ref stops the fight so a piece of Lauzon's glove can be cut loose. The second ends with Lauzon on top, even landing a mini-slam on Miller at one point.

The third and decisive round begins with Miller just peppering Lauzon, who springs another leak early. It goes back and forth for a while, until Lauzon locks up a DOPE-looking flying leg lock, which Miller easily escapes. In the waning seconds of the third, Lauzon unsuccessfully tries for a guillotine choke.

A ridiculously entertaining bout, with Miller picking up the unanimous decision victory. Probably the right call, but I have a hard time scoring the second round for Miller. 

Hey, you know that movie, "Movie 43?" Well, it looks like shit. 


UFC HEAVYWEIGHT CHAMPIONSHIP BOUT
Junior dos Santos (Champion) vs. Cain Velasquez (Contender)

Cain out to mariachi music, dos Santos out to "Gonna Fly Now." Whereas the first bout was over in less than two minutes, this one went the distance, and it was pretty awesome. Or at least, as awesome as a one-sided drubbing could possibly be, I guess.

First round, and Velasquez is gunning for a takedown. He ultimately ends up landing 11 out of 33 during the fight, which should tell you this dude ain't exactly GSP when it comes to grappling ability.

So, anyway, Velasquez just tags the hell out of dos Santos in the first round, with the challenger coming dangerously close to ending the fight. Had the guy officiating the bout been anyone OTHER than Herb Dean, it probably WOULD have been stopped in the first, too. Alas, dos Santos weathers the storm - literally WOBBLING into and out of his corner between the first and second rounds - and he proceeds to get his face re-arranged for the next five minutes.

The frank reality here is that Cain did so much damage in the first round that it was pretty much impossible for dos Santos to mount a comeback. However - and this is a biggie going into Velasquez's next bout - Cain himself looked visibly blown by the third round. As impressive as Velasquez's mauling was, I think dos Santos' inability to fall down and die like a normal human being is either indicative of Cain's in-need-of-improvement cardio OR maybe, just maybe, proof that all Brazilians are actually Highlanders or something.

Improbably, dos Santos gets a second wind heading into the fourth, but he can't really throw anything that would stop Velasquez. It was more of the same in the fifth, with Cain front kicking dos Santos in the face to close things out - sort of a symbolic "yeah, I just kicked your ass, and also, more literally, your face" gesture. Velasquez ends up reclaiming the belt, taking a unanimous decision. In the post-fight interview, he said it was the most difficult fight of his career, while dos Santos - despite looking sort of like a blood-soaked Squidward from "SpongeBob SquarePants" - promised retribution somewhere down the line.

SO, WHERE DO WE GO FROM HERE? Provided Overeem can get past Bigfoot at UFC 156 (and subsequently pass a drug test, which is really the hard part here), it's pretty much a given that Velasquez will throw down with the "Demolition Man" for his first title defense sometime in 2013. A beaten, but not conquered, dos Santos will likely do battle with either Daniel Cormier or Josh Barnett, once the two make their respective leaps to the UFC, with Barnett the more likely of the two candidates, since Dana absolutely hates his guts and wouldn't mind seeing him get steamrolled right out the gate. Jim Miller has solidified himself as a top ten lightweight, so I wouldn't be surprised if he got Gray Maynard as his next opponent. Lauzon, turning in a star-making performance tonight, will most likely see another lightweight stalwart in his next match-up...the recently de-contenderized Nate Diaz. The middleweight fray got shook up quite a bit tonight, so why not book these fights accordingly: Yushin Okami vs. Cung Le, Alan Belcher vs. Tim Boetsch and Costa Philippou vs. Hector Lombard?

THE VERDICT: Up until the last two fights of the evening, this thing was on the fast track to being one of the least memorable shows in recent UFC history. Strange what a little gore-soaked mayhem and a couple of tough-as-nails dog fights can do to rectify the ennui, huh? Certainly, the last two fights made the show, although there was some pretty cool stuff on the undercard (the Varner/Guillard bout and Todd Duffee knocking a dude out) that I didn't get to see. Probably not what I would consider a great show by any means, but you sure as hell get your money's worth for the last hour or so of fighting.

SHOW HIGHLIGHT: The Miller/Lauzon fight was utterly astounding, and the Velasquez/dos Santos title fight might just be the most memorable heavyweight throwdown since Lesnar/Carwin at UFC 116. 

SHOW LOWLIGHT: I paid actual money, people, to watch Derek Brunson beat up on Chris Leben, who, apparently, was trapped in a bubble of invisible Jell-O the entire evening.

ROGAN-ISM OF THE NIGHT: "He's just donating blood!" - uttered for the first of an estimated 985 times that Lauzon's forehead gushed out plasma into the third row.

FIVE THINGS I LEARNED FROM TONIGHT’S SHOW: 

  • The human body contains approximately 3,000 gallons of blood. 
  • It's probably a good idea to learn how to defend against takedowns if you're a striker. 
  • The Japanese really don't like Johnny Cash, for some reason. 
  • Getting 100 significant strikes and 10 takedowns in a fight is the MMA equivalent of a 200 yard rushing day, with 100 yards receiving. 
  • Brown Pride > War Brasilia (well, this time, anyway)

Well, that’s all I’ve got for this week. Crank up “Never Write” by Dynamite Hack and “A Little Respect” by Wheatus, and I’ll be seeing you folks in a few.

Wednesday, December 26, 2012

CHRISTMAS CRUNCH!

It’s Cap’n Crunch the Way You’ve Never Seen Him Before (Even Though It Tastes JUST LIKE the Regular Cereal, But Still!)


Good lord, the troubles I had to go through for this cereal.

I saw "Christmas Crunch" last year, and thought about picking it up, but didn’t. I waited an entire year and saw a big cardboard container of it at a local grocery store a few days before Thanksgiving, and once again, I thought about renewing my inventory, but I didn’t. So, the next week, I went out in search of the Cap’n Crunch variation, and you know what I found? Nothing. Absolutely nothing, anywhere. I spent the better part of a month just combing through local retailers, and not a single damn one had the stuff on their shelves. Ultimately, I did end up finding a box at one grocer, but the box literally looked like it had been chewed through by a rat or something. I thought about picking it up anyway, but since I’m not really that big a fan of the Bubonic Plague, I had to reshelve the item at the last second.

And so, about a week before Christmas, I finally found a non-rodent-chewed box, and this time, I knew better. I snatched it up, I locked it in my trunk, and I kept that thing well-guarded like it was the Stanley Cup or something. If I had to wait until 2013 to taste this stuff, I thought to myself, I wasn’t quite sure I had the internal motivation to press through such a long moratorium sans seasonal Crunch in my life.

I don’t know if you’ve noticed, but there are a lot of Cap’n Crunch variations out there. And also, it wasn’t until recently (as in, the past month) that I realized that the actual brand name was “Cap’n Crunch” and not “Captain Crunch,” as so many souls are often prone to calling it. And, uh, if you read my review of “Halloween Crunch” back in October, uh, yeah, let’s talk about Christmas Crunch, why don’t we?


I guess a good place to begin is the packaging. Not surprisingly, the box features Cap’n Crunch decked out in a Santa suit - I guess because depicting him as Jesus Christ probably would have ruffled more than a few feathers.


The big hook with “Christmas Crunch” is that it really doesn’t have a hook to speak of. All in all, it’s just regular old Cap’n Crunch, only with a couple of red stars and green Christmas trees thrown into the mix. And they’re not even marshmallow addendums, either; we’re talking cereal bits that taste JUST like the main product, only shaped and colored differently.



As far as the rest of the packaging goes, it’s quite basic. You’ve got your nutritional info on one side and ads for the million billion other Cap’n Crunch permutations out there on the adjacent panel. The top and bottom flaps of the box say pretty much the same thing, which is, fundamentally, nothing at all.


The back of the box, however, is kinda’ the exception here. For one thing, it says that there are “five” differently shaped cereal bits inside every box, including some red snowmen and a Santa hat, but I didn’t see anything within my cereal that came close to resembling either. Maybe it was a last second excision, or perhaps cereal-crafting technology isn’t advanced enough to give us adequate Santa hat corn puffs yet?


The backflap also suggests that you go out and buy a gingerbread house, and use the cereal bits as, among other things, shingles and landscaping foliage. A cool idea, I guess, but I think I have a better one; how about instead of using “Christmas Crunch,” you open up one of those boxes of “Halloween Crunch” you’ve been hoarding since fall and make a HAUNTED gingerbread house instead?


And, onto the cereal itself. It’s quite festive and colorful, no doubt. I’ve never really thought of yellow as being a Christmas color, but it’s not too sore a sight on your peepers, either. And if you see anything in there that resembles a snowman or a Santa hat, please encircle it with a bright black marker and e-mail me the photographic evidence.


Yeah, there’s not too much to say about the cereal, as far as aesthetics go. For whatever reason, I keep getting a trail-mix vibe here; although, for the life of me, I’ve never had a bowl of trail mix with sugary pine trees in it before.


And there’s even LESS to say about the taste of the product. If you’ve ever had Cap’n Crunch before, well, this stuff tastes EXACTLY like what you’ve already eaten before. And unlike “Halloween Crunch,” you don’t even have the incentive of radioactive green milk to keep you glued to your cereal bowl. It ain’t bad, by any stretch, but the “special” attributes of the product, I am afraid, are limited to purely cosmetic differences.


The Herculean task of finding a box of this stuff probably burned several million more calories than anyone should ever expend in the quest for a breakfast item - I’m convinced that Ah-nold put in a lesser effort trying to find a “Turbo Man” doll in “Jingle All the Way” - but I can now say that I’ve tried TWO different Cap’n Crunch variations explicitly tied to two different holidays, when most of humanity can’t even say that they’ve tried just ONE. The final product wasn’t too exciting, but this gimmick opens up the door for untold possibilities in the future. Easter Crunch? St. Patrick’s Crunch? Independence Crunch?

Yes, please. Yes, so hard.

Monday, December 24, 2012

A Round-Up of the Seasonal Foodstuffs of Christmas 2012

‘Tis the Season to Enjoy Oddly Shaped-and-Tasting Limited-Time-Only Candies!


For the last two Halloweens, I’ve done a round-up of the limited-time only, seasonal foodstuffs that come out around the Samhain season, and since so many limited-time, Christmas-themed candies are out on the market this year, I decided, what the heck, why not review a couple of those, too.

Let me start off by saying that there is a DELUGE of Christmas-themed variations out this year. Seriously, take a stroll down the seasonal aisle of the neighborhood big box store, and you’ll encounter literally DOZENS of permutations per brand of popular offerings like M&Ms and Oreos. Virtually every major candy bar I can think of has at least one Yuletide variation out there, from Snickers shaped like nutcrackers to these weird Butterfingers coins imprinted with miscellaneous Christmas images. Clearly, there’s no shortage of holiday-branded, limited-time-only items for you to load up in your shopping cart in 2012, so I decided to just snag five completely random foodstuffs and give them a proper look (and taste) see. The end results? Some good, some bad, and a whole hell of a lot of unusual among them both.


First up, we’ve got Reese’s Trees, which, if you couldn’t tell, are supposed to be Reese’s Peanut Butter Cups, only shaped like Christmas flora. The company tried something very similar last Halloween, only substituting pumpkins for trees, and yeah, I was a pretty big fan of the reshaped goods.


I’ve noticed that the packaging for these products varies pretty heavily, so you have tons of options as a consumer. You can pick up a single cup for about 50 cents, or an entire burlap sack for about five dollars. There’s nothing really different about the candies, other than the shapes, so if you’re anticipating any kooky flavors (Fir? Cedar?), I’m afraid you’re setting yourself up for disappointment.


Maybe the local Target employees like to run down the aisles with battery-powered hairdryers, because literally every individually wrapped candy in my bag was heavily warped to some degree. Like snowflakes, no two Reese’s Trees appear to be exactly alike, and for that matter, very few of them even remotely resemble Christmas trees. Upon further review, maybe they should’ve called these things “Reese’s Noses” instead?


It really wouldn’t be a proper holiday season without at least one limited-time only Pop-Tart permutation, and this year, there’s about three or four on store shelves. Since I’m not a real big fan of gingerbread flavored anything, I decided to pick up a box of Marshmallow Hot Chocolate ‘Tarts instead.


The packaging here is pretty rudimentary, and unlike the Spookalicious Pop-Tarts that now seem to be an annual offering, the folks over at Pop Tart, Inc. didn’t even have the common decency to give us something funky to cut out and play with on the back of these Christmas pastries’ boxes. When breakfast treat competitors are giving us cardboard cutout Cap’n Crunch Jack O’ Lantern stencils, you KNOW you’ve got to do better than this.


As far as the gustatory and textural quality of the pastries goes, I’d say they’re pretty passable. Any longtime Tart enthusiast, however, will quickly realize that these things taste a LOT like the S’mores-flavored Pop-Tarts that have been out on store shelves for years, which has me thinking that these seasonal products are really nothing more than full-time products simply painted in a different glaze of icing.


This year, I think there are officially more M&Ms variations out there than there are states in the union, and of the five million permutations available, I reckoned that these peppermint-flavored editions were the most “limited” AND “seasonal” sounding of the bunch.


I don’t think the overpowering scent of these candies can be adequately described in the English language. As soon as you open up the bag, your olfactory glands get gangbanged by a nuclear waft of peppermint, a scent I would say is comparable to the odor of an exploded spearmint chewing gum factory. Seriously, you could open up a bag of these, set them out on a table, and a good 99 percent of the earthly population will think you just dumped a bag of potpourri all over the carpet.



It’s really hard to describe the experience of eating these things. When you get down to the chemical nuts and bolts, these things, technically, taste like regular M&Ms, but the overpowering scent kinda’ tricks your brain into thinking you’re swallowing a ladle of undiluted cinnamon. You know how those candy corn M&Ms made me kinda’ nauseous back in October? Well, these things did pretty much the same thing, leading me to assume that at least SOME of the additives Mars is putting in these things are unfit for human consumption.


It seems like every major holiday, Little Debbie releases about a quintillion metric tons of seasonal mass-market baked goods, and this Christmas is no exception. Try going through the dessert section of your local grocery store and tabulating ALL of the holiday-themed cookies, brownies and crackers with the Little Debbie stamp on them. If your calculator doesn’t have exponential notation, I don’t think you can.


So, yeah, Christmas Tree brownies. They’re brownies, only painted with green icing, and dotted with multicolored “ornaments” of sugar. And unlike Reese’s hilarious attempt to reproduce nature in dessert form, Little Debbie’s trees actually resemble by-god plant life.


All in all, I’d say these treats are pretty good. Despite the funky-colored icing, they taste like pretty much every other snack cake you’ve ever had before, which is more of a positive than a negative. And I really liked the design of the brownies, too; I imagine if you had enough free time, you could have a hoot and a half painting the uncolored side of these things to resemble the spaceship from “Galaga” - which, provided these things get released again in 2013, I can almost guarantee you, 115 percent, that I will be doing this time next year.


And lastly, we come to Winter Oreos, which are one of numerous Christmasy-ish variations released by Nabisco this holiday season. The packaging promises us both red crème and FOUR fun winter shapes, but does it actually deliver?


Well, before I give you my take, let me address how much I HATE these newfangled Oreo packages. You see, instead of opening like normal packages, they want you to lift open a flap on top, which sounds pretty agreeable until you realize that it makes it EXTREMELY difficult to rattle out the cookies on the far ends of the rows. So unless you have fingers like Gollum, you pretty much HAVE to shake the hell out of the package to dislodge all of the goodies within.


So, the cookies themselves? Well, sure enough, you get four different imprints - which, last time I checked, aren’t the same thing as shapes, but what the hell ever - and yes, the filling is indeed red and gory looking. The catch here is, the cookies are completely the same as the normal Oreos, only with different stampings and icing the color of menstrual fluid as opposed to mayonnaise. Don’t get me wrong, it’s still pretty yummy, but what the heck, Nabisco! How hard would it have been to flavor that crimson junk like cranberries or something?


With 2013 upon us, I guess these things won’t be on store shelves for too much longer. I wouldn’t say anything I’ve reviewed today is worth going out of your way to experience, but any human being that can say no to peanut butter cups and toaster pastries shaped like bushes and flavored like cappuccinos is somebody I wouldn’t want to be commingling with around the holidays, anyway. Good, bad, it really doesn’t matter; like the Christmas spirit itself, these things are here today, gone tomorrow and a distant memory eight months from now. Enjoy ’em while you can, folks; after all, there are only 364 shopping days left until NEXT Christmas, you know…

Thursday, December 20, 2012

Jimbo Goes to the Movies: “The Hobbit: An Unexpected Journey” Review

The long-awaited prequel to the “Lord of the Rings” trilogy is here and…well, it isn’t as good as the first three movies, but it’s still kinda’ all right, I guess. 



The Hobbit: An Unexpected Journey (2012)
Director: Peter Jackson
Runtime: 169 minutes
Rating: PG-13 

Well, I caught “The Hobbit” last weekend, and while I didn’t really dislike it, I can’t say that I was a huge fan of the flick, either.

The funny thing is, I actually read “The Hobbit” over the summer - a really old copy, too, with an airbrushed cover featuring a Gollum that somewhat resembled a Martian vampire stalking Bilbo Baggins (as played by a chubby Martin Short, of course.) And while I’m no expert on Tolkien lore, I’m pretty sure that a good 80 percent of the shit that went down in Peter Jackson’s film adaptation WASN’T in the book that I read back in July.

Let’s start off with the obvious thing here, which is the length. By and large, “The Hobbit” is a pretty short read, and you could probably adequately cover the entire thing in a flick that’s a little under two and a half hours. Well, “An Unexpected Journey” clocks in at a little under three hours, and what do you know? Inexplicably, Jackson and company have decided to turn Tolkien’s 200-page prequel into a full-blown trilogy, meaning that you could probably read the entire book TWICE in the same amount of time it takes to watch the first two movies.

Now, I’m not saying the movie is necessarily formulaic, but there’s really nothing here that you didn’t see in the first three movies. While “The Hobbit” as a literary foray was a little more subdued than Tolkien’s “Totally-Not-At-All-A-Parable-For-World-War-II-Not-Even-Remotely” series, Jackson and crew decided to stick with what made ‘em rich and Oscar-y the first time around, so if you’re expecting a blithe, lyrical romp, you’ll probably think otherwise around the third all-out ogre war sequence.

For the most part, the film kinda’ stays close to the source material, in that, yes, most of the characters are here, alongside tons of “Lord of the Rings” mainstays that just HAD to get a cameo in this one (despite not appearing at all in Tolkien’s “Hobbit.”)

If you want action, then yeah, you’re going to get you some action, all right. Decapitated heads fly through the air, ogres get stabbed a bajillion times like bosses out of “Ninja Gaiden,” and at one point, a gargantuan frog-jowled king (which didn’t remind me of ANYTHING out of “Super Mario Bros. 2") has his stomach heroically slit open by one of the eight zillion protagonists, who all share proclivities for eating cheese, drinking excessively, smoking what is probably weed and killing the hell out of a lot of demon monsters that, if I didn’t know any better, probably represented J.R.R. Tolkien’s subconscious fear of black people.


Thematically, it’s been-there-done-that territory, but the acting is pretty decent (despite some fairly hammy lines, including, if you can believe it, a testicle joke anchored around the game of croquet) and the special effects are downright spectacular. And because I’m not allowed to enjoy anything at face value, I like the fact that the movie is really secretly anti-Semitic, with the dwarves serving as stand-ins for Jews.

So, things I liked about the movie: I don’t know where in the hell it comes from (it certainly wasn’t in MY copy of “The Hobbit”), but the part where the giant rock monsters throw rocks at each other? Well, that was pretty cool. I also liked the fact that they went all “Jaws” on us and never really show us what Smaug (the gold-loving dragon antagonist of the book and soon-to-be-trilogy) looks like. And also, it has an unexpectedly heavy-handed environmental subtext, with everybody in the movie running around talking about how “Smaug” is such a threat to modern society. And of course, there’s Gollum, who steals the movie, as expected. And in that same vein, has anybody else noticed just how much Gollum looks like an anthropomorphic version of Ren Hoek from “The Ren & Stimpy Show?”

Admittedly, I’ve never really been a big fan of the whole fantasy/sorcery genre (well, outside of playing “Golden Axe 3” on the Sega Genesis, anyway), but the movie - for all of its excesses - remains pretty fun and enjoyable, and even though it drags on for a couple of weeks, it never gets too dull at any one spot in the picture.

Is “The Hobbit” sure-fire Oscar-bait this time around? Absolutely not. Is it worthy of a “Best of 2012” list? Once again, I would say no. But, if you’re looking for a decent way to kill off half an evening, and you don’t mind having to stare at Ian McKellen’s alligator purse face for upwards of 40 minutes at a time, you probably won’t hate it.

And hey, did I mention that the entire Tolkien mythos may or may not be unintentionally racist?

Wednesday, December 19, 2012

Why The 2013 “Evil Dead” Remake is Destined to Suck…HARD.

Five reasons why the upcoming remake/retread is bound to be awful…


I’ve been meaning to write about next year’s “Evil Dead” remake since I saw the launch of the red-band trailer last Halloween, but I haven’t been able to for one primary reason: because every time I think about this upcoming abomination of celluloid, I become too angry to link up coherent sentences in my head. I’ve actually tried to sit down and iron out a blog post about it a couple of times, but during every attempt, I had to slam my laptop shut in disgust and just walk away. Not dwelling on the subject at all, I suppose, may end up saving me a couple of fist-shaped holes in the drywall.

I’ve gone on record about a billion times regarding my adulation for “The Evil Dead.” I never get tired of telling people about my quixotic quest to find a VHS copy back in the day, after reading blurbs in the Leonard Maltin film book about how disgusting it was. I ended up finally scoring a copy at the downright skuzziest mom and pop in town - they didn’t even have box art for the movie, it was just a piece of lumpy Styrofoam with the words “Evil Dead” written on it and a Roman numeral “I” positioned underneath it. The video copy itself was absolutely horrible - if I didn’t know any better, I would say it was a copy recorded off HBO sometime in the early 1990s - but as soon as I jammed that little rectangle of evil in my VCR, I just knew my life would never be the same.

I’ve written so much about “The Evil Dead,” and its influence on middle-school me, that it’s just redundant to mull over the same old stuff for the one quintillionth time. What I will re-state, however, is threefold: a.) “The Evil Dead” remains arguably my all-time favorite movie, in any genre, b.) it inspired me to pursue a career in the arts (because after watching it, I was convinced ANYBODY with a camera, pluck and a lengthy set of ideas could make exquisite trash cinema) and c.) it instilled inside me a sense of DIY ethos, an independent spirit, if you will, that found not only quality, but virtue, in making low-budget, un-financed projects.

Which, of course, brings us to the subject of the “Evil Dead” remake, a big-budget do-over scheduled for a national, theatrical release next spring. Seeing the much-ballyhooed/criticized trailer earlier this year was a downright shock to me, primarily because I had no idea the long-discussed remake had even been filmed, let alone that it was already in post-production. And needless to say…I was not impressed by what I saw.

Now, I haven’t seen the “Evil Dead” remake, and unless it cures all diseases known to man, I won’t be seeing it, either. In some ways, you could attack me for decrying something that I haven’t even encountered yet, or you could look at me as a guy with a good sense of depth perception that knows when we’re about to hit a brick wall of bullshit at 90 kilometers per hour. The writing, as they say, is clearly on the wall here, and I’d venture to guess that there’s a 99.999999999989 percent chance the “Remade Dead” is going to not only suck, but suck hard enough to cause a magnetic field reversal and end humanity as we know it.

So, why do I think the movie is going to blow, and magnificently? Well, here are five cast-iron reasons why the “Evil Dead” remake simply cannot succeed as a cinematic retread…

REASON NUMBER ONE:
It’s Not a Product of the Same Conditions as the Original 

The production process of “The Evil Dead” is a part of Gen Y lore - in fact, you could even say that the success of the project is pretty much the quintessential Millennial fantasy made flesh. A bunch of Michigan State dropouts spend their entire college stay making Three Stooges homages instead of studying for biology class, and after making the greatest backyard horror movie of all time, manage to cajole some financier into giving them moolah to make this insanely violent, hyper-original horror movie for roughly the same amount of money that would buy you a new SUV today. Professional sorts that they were, they took over FIVE years to finish filming, with Raimi, Tapert and Campbell doing special effects in their grandmothers’ attic and having to film outdoor scenes inside of garages with barely enough walking room for more than two people. And then, there are the urban legends about the sheer torture the crew had to go through to make the flick - but yeah, I’m sure you’re sick of hearing about those by now, anyway.

Some people say the inherent “cheapness” of the original “Dead” was what gave it its aesthetic charm, but if you ask me, it’s what made the movie work in its totality. Had the makers of the film had an actual budget to work with, the entire spirit of the flick would’ve been different. It was a team effort, constructed by a ragtag ensemble of highly passionate (and highly broke) individuals that were willing to substitute unbridled creativity for financial backing. The process behind “The Evil Dead” simply cannot be replicated again, and as a result, I think it’s highly, highly improbable that a re-do under more favorable economic conditions can result in the same highly original, highly energetic and highly entertaining product.

I guess you could say that the heart of the matter here is that “The Evil Dead” remake just doesn’t have the same contextual significance that the original had. The original “Dead” came out during the height of the American degenerate cinema Renaissance, at a time when independent films still had distribution options and the term “CGI” was completely non-existent. The original “Dead” was something of a counter-culture, punk rock horror flick that was a reaction against the multi-million-dollar, kid-friendly “horror” flicks of the time, like “Poltergeist” and “The Amityville Horror.” It brought the genre back into the gutter where it rightfully belonged, only infused with a sort of DIY creativity that made it stand out from the million billion slasher flicks of the timeframe. The stunning camerawork in the film was basically Sam’s means of “covering up” the fact that the crew didn’t have enough money to film things conventionally; the movie was a lot of fun, no doubt, but it also carried a pretty palpable amount of “F U” to the industrial movie complex of the early ‘80s.

The “Evil Remake,” a $14 million film with CGI effects and SAG actors and actresses, just doesn’t have that same independent spirit. It’s not a labor of love and fury made by starving youngsters the same way the original was, and the film will almost assuredly lack the somewhat political animosity that the original carried against the Hollywood system of horror. Simply put: how are we supposed to expect the same product when the process is complete anathema to the one that produced the original offering to begin with?

REASON NUMBER TWO:
It’s Being Helmed by a Totally Inexperienced Director (and a Screenwriter We Know for a Fact SUCKS)

The director hand-picked by Sam Raimi for the project is a dude named Fede Alvarez. The movie is not only his first mainstream feature film, but his first feature film EVER; take a good look at his resume, and you’ll note that the dude’s entire filmography up until the “Evil Retread” consisted of nothing but short films (and three-quarters of his works were made BEFORE he turned 18, at that.)

You can check out his last “movie,” a 2009 iMovie called “Panic Attack,” and honestly? I’m not seeing what Raimi sees in this kid. Yeah, he’s good with Final Cut Pro, but as far as technique, and originality, and the ability to produce anything beyond an aesthetically interesting product? Not only has this Alvarez tyke NOT proven that he can make films with depth, he hasn’t even proven that he can make actual films, period. Hell, even Raimi and Co. had at least one feature film under their belts before getting the green light for “The Evil Dead.” For that matter, the beta project the kids used to get funding for “The Evil Dead” was a lengthier, more nuanced production than anything Alvarez has ever handled. It’s one part of a dyad of unavoidable suck, which goes from pessimistic to downright hopeless once you figure out who’s behind the remake’s script: Diablo Cody.

OK, so the script is actually credited to a four-man team involving Fede, Sam, some guy that works with Fede, and Cody, but just having the “Juno” scribe as a part of the parallelogram is like some sort of furtive warning that pure suck is ahead of us. Can Cody make a straight-up film as opposed to a cross-referential hodgepodge of pop cultural name checks? Well, if you’ve ever seen anything she’s written, you know that’s simply impossible. Hell, we already LET her take a stab at “real horror,” and the result ended up being a tremendously horrible train wreck, noteworthy simply because of a pointless scene in which Megan Fox and her toe thumbs play tongue lacrosse with that huge-eyed chick from “Red Riding Hood.” Odds are, the Necronomicon will have at least one New Kids on the Block sticker placed on it somewhere, because, you know, it’s all kooky and kitschy and stuff.

And for all you poor souls that think that having Raimi on board as a writer is enough to save the movie from utter crappiness, just remember: this is the same dude that penned such cinematic wonders as “Easy Wheels,” “M.A.N.T.I.S: The T.V. Movie” and, gulp, “Spider-Man 3.” Like several tons of fertilizer, the net outcome here could be a thunderous explosion of shit, no doubt.

REASON NUMBER THREE:
Seriously…Why is Everything So Green? 

A lot of things struck me about the trailer for the remake, but after watching it two or three times, the only thing I could think about was “Jeez, why is everything so guldarn GREEN?”

Go ahead, watch it. You have my permission…to have your eyes blinded by what appears to be celluloid soaked in lime Jell-O. I don’t know if this Fede kid is trying to make some sort of abstruse environmental message with the movie or what, but this much is incontestable: if the hard-R violence doesn’t make your stomach churn, the monochrome, Game Boy-esque cinematography of the movie might just be enough to have you dry-heaving in theaters.

REASON NUMBER FOUR:
There’s No Way it Will Have the Same Impact as its Inspiration

When “The Evil Dead” was originally released, nobody was praising its premise as being original; five kids go into the woods, and they DON’T come out with the same number of arms and legs they went in with. Even in the late 1970s, it was a done-to-death convention, and after a billion-trillion “Evil Dead” rip-o…err, homages…like “Cabin in the Woods” and “Cabin Fever,” the narrative hook seems even more hackneyed and lifeless today.

“The Evil Dead” had a profound effect when it was initially released, because at the time, IT WAS something different. Slasher flicks were a dime a dozen, and a super-gory, no-budget supernatural splatter flick was pretty much anathema to U.S. horror at the time. It took a standard template, amped it up to a billion, cut no corners, and drove home its point with highly innovative camerawork, an aesthetic and pacing model that was totally contra to everything else out there at the time, and oh yeah! It was violent as all shit, too. Watching people have pencils jammed through their ankles and get sexually assaulted by tree limbs was the kind of stuff audiences just weren’t used to circa 1983 - today, however, it’s pretty much cinema du jour, thanks to all of those lame torture porn flicks like “Saw” and “Hostel” and “Madea’s Witness Protection.”

The standard of shock has been raised dramatically since then, and since we already KNOW what gimmicks the movie is going to use (it is a remake, after all), how in the bluest of hells is the flick supposed to have even a modicum of surprise or innovation?

REASON NUMBER FIVE:
How Could they Possibly Make it Live Up to the Original? 

While there have been some decent horror remakes over the years, here’s a litmus test for you: just how many remakes have you seen, over the last fifteen years, that were BETTER than the original movies?

Go ahead, hit up Wikipedia, and try your best.

And as a final note, I would just like to parrot what one of my friends - who also considers the upcoming remake to be utterly pointless - recently said on the matter.

“Besides, there’s already a great remake of ‘The Evil Dead’ out there,” he said to me.

“It’s called 'Evil Dead 2.'”

Sunday, December 16, 2012

More Guns? Then Expect More Gun Deaths as a Result (And Other Examples of Logic Completely Rejected by Second-Amendment Enthusiasts)

Why the Gun Lobby’s Claims that Concealed Weapons Laws Prevent Mass Shootings Are Utterly Unfounded


Over the years, I’ve concluded that there are just two types of people you cannot reasonably argue with. One, of course, are hyper-religious folks, predominantly of the evangelical Christian variety. The other? Hardcore gun lobbyists and advocates.

A lot of times, you hear about their “argument” against gun control, which to me, is a complete and total misnomer, because, simply put, the pro-gun lobby doesn’t HAVE an argument to speak of. Their perpetual war cry is a mixture of tautology and metaphysics, a two-pronged, reason-immune platitude that’s so circular, it almost resembles an infinity sign.

No matter what, the NRA lobbyists say the same thing, over and over again: “The problem with gun violence is that we just don’t have ENOUGH guns.” That’s their default solution, a completely logic-proof equation that they will NEVER change their minds about. You see, they think that the only way to combat gun violence is to make it so that people have greater access to guns, so that they can possibly shoot people that may embark upon a murderous rampage. Prevention, they think, shouldn’t begin until at least ONE bullet gets embedded in someone’s skull, and any measures to combat gun violence before that are constitutional rape.

Just two days before the Newtown Massacre, a dude over at Gunowners.org named Erich Pratt published an article called “Sadly, Another Gun-Free Zone ‘Success’ Story,” which was written in response to an Oregon mall shooting earlier that week. Pratt’s “argument” is just about the best example of the anti-gun-control mantra that I can possibly imagine.

“The lesson is clear,” he writes. “Good guys with guns save lives. And while bad guys may be evil, they are not stupid. They don’t typically target gun stores or police stations to perpetrate their crimes. No, they consciously select areas where their victims are disarmed by law.”

Needless to say, there are just a few peculiarities with Pratt’s perspective (which I will get back to in just a bit), but where things get interesting is when he lists a number of shooting incidents in which he credits armed citizens with preventing mass violence from unfolding.

His first two examples should give you a pretty good idea of how airtight his argument is, as he cites instances in which armed robberies - which, last time I checked, aren’t the same things as “mass shootings” in terms of motives and criminal sociology - were halted by senior citizens just lugging around firearms.

Pratt then goes on to mention both the 2007 Trolley Square Mall massacre in Salt Lake City and a 2007 Colorado Springs church shooting as incidents where citizens packing heat prevented “mass fatalities.” What Pratt doesn’t tell you is that in the Salt Lake City incident, the man that fatally shot the gunman was a policeman, and that in the Colorado Springs case, the individual that shot the gunman was an ex-police officer serving as official security personnel at the church. And even then, she didn’t mortally wound the attacker, and couldn’t prevent the shooting deaths of two churchgoers, much in the same way the SLC police force didn’t prevent the gunman from killing five others before being brought down.

While NRA lobbyists go on and on about how Concealed Weapons Carry permits are the only way to deter mass shootings, one of the things they tend to under-publicize is the fact that 49 states ALREADY have concealed weapons carry laws, which give individuals the right to lug around firearms anywhere where metal detectors aren’t. And of those states, only 16 have legislation that gives private businesses the ability to opt out - rather counter-intuitively, I might add, since such businesses are required by law to post “gun-free zone” signage, which, by the tautology of open-carry proponents, MAKES them more likely “targets” for mass shootings.

A relatively sparse entry on Wikipedia lists reported “defensive gun use incidents,” primarily a list of robberies and home invasions “thwarted” by individuals that pulled firearms on the assailants (and try not to LOL too hard, but in at least three of said incidents, said attackers just so happened to be escaped zoo animals.)

By comparison, here’s a list of mass shooting incidents that have transpired in the U.S from July 2012 until LATE October 2012 ALONE.

And then, there’s some other data that doesn’t really bode well for the “more guns, in more places, equals less death” argument.

Houses with firearms are a staggering 22 times likelier to experience suicides, seven percent likelier to experience homicides and four times likelier to experience an unintentional shooting death or injury than homes without guns. A 2009 study finds that individuals possessing guns are about 4.5 times likelier than people without guns to be shot in an assault. And while the FBI estimates that about 200 legally justifiable self-defense homicides were committed by private citizens with firearms in 2008, the CDC says that about 31,000 people total were still gunned down in 2009. That means for every justifiable self-defense homicide that happens in the U.S., guns are involved in roughly 155 fatal shootings that aren't.
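
For what it's worth, that 155-to-1 figure is just straight division of the two numbers cited above. Here's a minimal back-of-the-envelope sketch in Python, purely for illustration, using the FBI and CDC figures exactly as quoted in this post:

# Rough check of the ratio above, using the figures as cited in this post.
justifiable_homicides = 200   # FBI estimate: justifiable self-defense gun homicides, 2008
total_gun_deaths = 31_000     # CDC estimate: total firearm deaths, 2009
ratio = total_gun_deaths / justifiable_homicides
print(f"Roughly {round(ratio)} gun deaths for every justifiable self-defense homicide.")  # ~155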

In total disregard of how mathematics works, anti-gun-control lobbyists still insist that open carry laws are the ONLY way to prevent gun violence, when the statistics clearly indicate that, somehow, the presence of more firearms mysteriously results in more instances of firearms-related violence.

There’s probably some irony to the fact that gun lobbyists are ALWAYS droning on and on about their Second Amendment freedoms, when many of the policies they advocate are clear-cut violations of the Fourteenth Amendment’s Due Process clause. As much as they moan and whine about gun licensure (not really a problem in the U.S., since almost half of gun purchases are made on second-hand markets), they don’t seem to pick up on the fact that open carry laws are impositions on private businesses and individuals at public functions. Never mind the logical fallacy, the very fact that gun owners are demanding that they be allowed to bring firearms into privately held businesses and forums is, in and of itself, unconstitutional and a major example of special-interest-backed statism. Let us mince no words about what open carry proponents are actually advocating: the use of government force to MAKE individuals allow people to bring potentially lethal objects into private and publicly-held forums, regardless of the consent of the private parties or public participants.

If I didn’t know any better, I’d say that the gun lobbyists are now vouching for something of a paramilitary state in which kindergarten teachers are armed with handguns and surgeons conduct operations with semi-automatic weapons at their hips. Permitting lethal weapons in schools and hospitals and bars that serve alcohol will actually reduce the likelihood of violence occurring at said establishments, they say, as self-ordained defenders of the public “good” (recall Pratt’s quote from earlier, with his abstract talk of who “good guys” and “bad guys” are) will surely only use such weapons for defensive purposes (although the data clearly indicate that said individuals are only about 155 times likelier to use them in a non-defensive act.)

I once saw a bumper sticker that said that blaming guns for murder is like blaming pencils for handwriting errors. Of course, there’s also that famous maxim, “guns don’t kill people, people kill people.” While it is true that guns aren’t the root reason why murders take place, the statistical reality here is that firearms make murders much more efficient and likely to produce fatal outcomes. The very same day 20 children were shot to death in Connecticut, some dude in China went on a rampage, stabbing an almost equal number of children. The crimes were almost staggeringly similar, with two exceptions: the methods of attack, and the ultimate outcomes. In one case, a knife was used, and in the other, firearms. One event produced zero fatalities (and only a few severe injuries) while the other resulted in the cold-blooded murder of 26 individuals. And of course, the pro-gun folks respond as they always do: “Well, if them teachers had guns, nobody would’ve got stabbed in the first place.”

By the way, I’m not what you would call an individual that’s in favor of “banning” guns outright, either. Despite the incessant fears of many an NRA member, I don’t think I’ve ever encountered a SINGLE gun control advocate that said he or she was in favor of totally disarming citizens. And the less said about those fringe lunatics that think mass shootings are “false flag” operations as part of some mass de-arming campaign, the better.

What I do think is important, especially in the wake of the Newtown tragedy, is an honest-to-goodness debate about gun policies in the U.S. For far too long, the entire national debate has been framed by a special interest group numbering a paltry five million individuals, whose lobbying ties to conservative legislators are so close that House chairs have consciously avoided things like “FBI data” and “CDC statistics” that prove beyond a shadow of a doubt that the “more guns, less gun deaths” argument is complete and utter hokum.

Why not make gun owners in the U.S. take exams the same way we make motorists re-up for driver’s licenses, anyway? To operate semi-trucks, and especially trucks containing hazardous materials, drivers generally have to pass a litany of tests and exams and receive constantly updated certifications. The logic there, I think, is clearly applicable to firearms owners; if you’re going to be entrusted with potentially lethal devices, and you’re going to be in the public sphere, maybe it’s in society’s best interest if we make sure you’re not a sociopath before we hand you the Desert Eagle. And if guns aren’t culpable for mass mayhem, then why don’t gun lobbyists and advocates put their money where their mouths are and pump more dollars into psychological testing of school children, mental health care services and public security infrastructure?

If you’re looking for a response, you’re not going to find one, sadly, other than the constant refrain of “more guns prevent gun deaths from happening” - an utterance, I think, that’s a little hard to pick up over the sounds of five-year-old children being shot to death.

Friday, December 14, 2012

How to SCIENTIFICALLY Avoid a Crappy Life...

In four neurologically proven (albeit, not-so-simple) steps…


Life, as we all know, isn’t exactly easy. We have to deal with nonstop external issues, which make our already-tense internal issues fifty times worse. We’ve got interpersonal problems, we’ve got intrapersonal problems, we’ve got problems on the social level and we’ve got problems on the spiritual level. And just about every neurologist worth his PhD will tell you the same thing:

It all goes back to upbringing, people.

The term “upbringing” is tossed around all the time. As parents, mom and dad are expected to mold little Susie or Johnny into a decent-ish human being, so that the fruit of their loins turns into a college-educated, non-drug-addicted, workforce-ready android at exactly 22 years of age. After that, you’re officially considered an adult, and you can do whatever the hell you want, because society has deemed you responsible enough to go out there and buy cars and houses and fill out people’s paperwork and all that stuff.

The thing is, people look at “upbringing” as strictly a form of social development - that means being instilled with personal and cultural values so that the little snot-nosed rug rat that pukes on the carpet can become the CEO of a Fortune 500 company someday. In reality, though, “growing up” probably has WAY more to do with one’s neurology than his or her “sociological” rearing. Where things get extremely interesting is when you evaluate the whole biological/social schism, and realize that the two are so closely intertwined that it’s next-to-impossible to imagine a human being becoming anything without looking at the external and the internal wrapping around each other like that swirling candy cane pattern on a barbershop pole.

Modern brain science has pretty much TOLD US that there are four things that ultimately determine whether or not a child will turn into a functional member of society or the next Richard Ramirez. Forget all of that psycho-babble Piaget nonsense; the key to shaping a human being into a decent cultural inhabitant has EVERYTHING to do with hitting key developmental points SANS external impediments. In other words? Neither “nature” nor “nurture” plays a greater role in determining what kind of human being an individual becomes - instead, it’s how one is nurtured at certain junctures in natural development that ends up making a person, for all intents and purposes, who they end up being for the rest of their lives.

So, let’s say you’re an expectant parent. You really want to make sure your offspring will make you proud someday, and you’ve been reading all sorts of developmental psychology stuff to figure out how to turn your kid into a model citizen. Soon-to-be mamas and papas, let me save you some time and just quickly run down the four scientifically validated things one has to do to ensure that his or her child DOESN’T grow up to be an unabashed failure.

Want to avoid having a crappy life? Simple, dear readers: just follow these four scientifically backed steps, and you’ll be on the path to excellence in no time…

STEP ONE: 
Have wealthy parents (or at the very least, have parents that try to make your home life as un-chaotic for you as humanly possible)

Now, I know what you’re thinking. “Wait a minute, you mean to tell me that the first step to having a non-sucky life is to have well-off parents? You mean, that thing I have absolutely zero control over?” In reply? “Yeah, and it’s a whole lot more important than you’d think.”

This much is absolutely undeniable - parents with higher socioeconomic status tend to have children that, on average, are more intelligent than the children of parents with lesser socioeconomic status. From a sociological perspective, it’s really a “no shit, Sherlock” argument: of course parents with more money have more intelligent kids, because they can afford to send them to early education programs and enroll them in better schools and hire tutors and all that jazz, while poor ass kids (and trust me, I was one of them) had to make do with whatever scant materials the under-funded local school district had to work with. But in that, there’s also a surprising neurological angle that very few people are aware of.

Check out this study released earlier this year. Researchers say that the reason the children of rich parents end up smarter than poor kids has very little to do with socialization, and a whole hell of a lot more to do with neurological development. They found that the hippocampus - the part of the brain that’s vital for things like memorization and learning - held a much larger “volume” in kids from wealthier homes than in kids from lower income homes. Additionally, they found that the amygdalae of children with richer parents held a much smaller “volume” than those of poorer kids. The amygdalae, by the way, are the parts of the brain that process what “stress” is - in short, the children of more educated, wealthier parents are neurologically “wired” to better cope with stressors and retain information than lower class tykes.

Oh, and it gets worse from there. You see, children that experience stressors from 0-5 years of age aren’t just emotionally stunted, but neurologically damaged in the process. If “toxic stress” levels climb too high for children, scientists say the neural connections in their brains are permanently “shorted.” The end result? Higher levels of drug abuse, depression and even shittier overall health later in life.

The secret science here is that children of wealthy parents aren’t smarter BECAUSE their parents are educated and have money, but because their moms and dads are able to provide more stable home lives and earlier educational opportunities, which in turn KEEP their kids' brains from becoming whirlpools of pre-school angst. Feasibly, a child can become an intelligent individual despite his or her parents’ meager income, provided ma and pa do the best they can to keep the household peaceful and free of too many threats to their child’s mental well-being.

Which brings us to the topic of broken homes. You know what, I’ll just let you read the data for yourself. Clearly, if being in a poor household with TWO parents generally puts people at an early neurological disadvantage, growing up with just one parent tends to put people on the fast track to G.E.D.-ville in a hurry.

BUT, there seems to be one major weak spot in this “socioeconomic/neurological” barrier. Regardless of parental income or education, evidence suggests that children that learn to read at earlier ages generally end up with greater cognitive abilities than their peers - meaning that the quicker your kid becomes literate, the more likely he or she is to compensate for whatever neurological shortcomings he or she may have encountered earlier in development.

STEP TWO: 
Learn as much about the social sciences as you can before entering high school 

One of the great unpublicized realities of neurology is that our intelligence levels - for all intents and purposes - “crystallize” before we enter the 9th grade. Long story short, how intelligent you are compared to your peers at age 14 is generally how intelligent you will be compared to your peers throughout life. That means that between the ages of 6 and 13, it’s time to get your ass seriously cultured.

“Intelligence” is a pretty difficult thing to gauge. No matter how you define “smart,” a few general attributes come up whenever we try to determine whether or not an individual is above average: memorization, spatial skills and the ability to multi-task, among others. The “problem” here is that people generally look at intelligence as a basic evaluation of STEM capabilities, and that’s it. Going back to that whole left brain/right brain dynamic we’ve heard about a billion times now, students that score astoundingly high in math and science generally have a more difficult time handling the humanities than the average student. Put a top-10-percentile engineering student or computer scientist in a sociology or philosophy class, and watch Mr. Smarty Pants go from teacher’s pet to dumbfounded just like that.

You know how earlier, we were talking about richer kids being more soundly “wired” neurologically than poorer kids? Well, that’s true, but as we all know, neurological ability doesn’t necessarily mean one is better at adapting or applying him or herself. The reality is, a lot of professions that we think of as being the domain of eggheads - math, science, technology, engineering and economics - are all quantitative fields that are most accessible to individuals with raw intelligence. Their neurons process information faster, and their heads can store greater amounts of data. They have the ability to memorize complex data sets and equations, and have no problem evaluating complex systems - just as long as those complex systems have a certain numerical value and a definitive solution set.

Now, let me turn your attention towards an entirely different kind of intelligence - social intelligence.

Like general intelligence, social intelligence is a little difficult to decisively nail down, but according to most theorists, one’s social intelligence encompasses several key things: one’s self-awareness, one’s awareness of society and social prompts, and one’s general understanding of the complexity of human behaviors. If general intelligence is a measurement of one’s quantitative abilities, then social intelligence is a measure of one’s qualitative abilities - how he or she develops social cognition, empathic accuracy, self-presentation and a whole host of other interpersonal skills.

While raw intelligence can be improved through practice, the neurology is pretty damning; if a kid has more well-developed neuroconnectors, more hippocampus volume and less amygdala volume, of course he or she will be inherently better at quantitative studies than the average individual. Where pre-adolescents can gain an upper hand is when parents place an emphasis on increasing the social intelligence of their children; that is, developing interpersonal skills and a greater knowledge of social systems and the complexity of self at an early age.

That means that, between the ages of 6 and 13, your kid needs as many social experiences as he or she can get. That means a ton of interaction and exposure to culture; as in, real culture, and not all of that bullshit on PBS. A lot of people think history, psychology and philosophy are “too intricate” for young kids, but I tend to disagree; think about the social advantage a pre-teen has when he or she KNOWS about social factors like groupthink, Freudian transference and even the Marxist “base and superstructure” theory. Not only does it make for a more socially-attuned kid, I think it makes for a more well-informed, better-judging individual all around. The more a young kid knows about self and society, the more likely he or she will figure out how to best adapt and apply him or herself in diverse social environments - a highly desirable characteristic that even the most well-adjusted children of privilege oftentimes have major difficulties achieving.

STEP THREE:
For God’s sake, stay away from drugs and alcohol

This one is a no-brainer if there ever was one. The scary thing here is, people seem to be completely unaware of how drug and alcohol use during adolescence hampers one’s neurological development - as in, permanently.

Let’s get the basic stuff out of the way early. If you need me to tell you that doing heroin or meth at 15 is bad for a teen’s neurological development, then congratulations on figuring out how to finally open a laptop. But even with the “milder” stuff, the physiological effects, while relatively short-term, still have a major developmental influence on the neurological growth of young people.

Recently, researchers made a pretty startling find about the long term impact of chronic drug use among adolescents. Apparently, teens that use marijuana more than four times a week (and believe you me, there are indeed plenty of kids that do just that) end up with IQ scores in their late 30s that are about 8 points lower than the general population’s. Where things get particularly freaky is that the study suggests that the neurological impact of weed use during adolescence is enough to cause an “irreversible” toxic effect on one’s brain chemistry.

The gist here? Doing drugs or imbibing alcohol - no matter which kinds we’re talking about, and no matter the quantity - is enough to royally screw up the last portions of one’s developing brain if he or she is under the age of 18. So if you’re thinking about chuggin’ or druggin’ before you graduate high school, just remember - the damage you’re looking at lasts a little bit longer than a morning hangover.

STEP FOUR:
Try not to be a complete and total shithead by the time you turn 25 

25, at first glance, seems like a pretty arbitrary number. And yes, the term “shithead” is completely vague and unscientific, but give me a second, and I will fully explain why this is important.

There’s a part of the human brain called the prefrontal cortex. Basically, it’s the part of the brain that’s responsible for individual judgment - that is, the most strategic portion of the human mind. It allows you to weigh options, develop long-term plans and fully consider the consequences of your actions. And wouldn’t you know it, the average human’s prefrontal cortex doesn’t completely develop until…you guessed it…25.

The grim reality here is that once we hit 25, we’re basically done “developing” on the neurological level. While we may maintain our basic intelligence level and mental faculties from 25 until we hit middle age, by the time we’re in our mid-twenties, that’s it as far as development goes. Our morals are basically concrete, and how we respond or react to environmental prompts is hardwired into our skulls. At 25, we are as socially and neurologically developed as we’re going to be, so from then on out, who we are is pretty much who we’re going to be until the day we die (barring some major traumatic brain injury or an unfortunate case of full-blown Alzheimer’s, of course.)

With that info in mind, I suppose you can see the importance of preventing individuals from becoming shitheads before they hit their mid-20s. While social interventions can alter an individual’s perceptions and behaviors in adolescence, by the time an individual reaches the quarter-life crisis, it’s pretty darned difficult to “reverse” one’s prefrontal proclivities. Not only do the frontal lobes serve as the “core” of one’s decision making, they also serve as the emotional hodgepodge where things like anger, sexual desire and compassion for others are “solidified” as one’s traits.

Now, that’s not to say that one’s prefrontal-driven characteristics can’t be altered, it’s just that it’s way, Way, WAYYY easier to do so when a person is younger and still developing mentally. That’s why it’s so much easier for an individual to learn multiple languages when he or she starts as a kid than if he or she were to try to pick up a second tongue in college - your brain is still in the process of forming into its “permanent” shape, and you still have a little developmental space left to acquire new skills and even strengthen things like neuroconnector speed. In the same vein, that’s why so many criminal justice folks place an emphasis on rehabilitating offenders when they are still children - the “core” of who they are is still moldable, and a lot of those prefrontal patterns are still susceptible to alteration.

To make a long story short, by the time you are 25, your behavioral dispositions are pretty much solidified inside the complex circuitry of your brain. After that, you are more or less the individual that you will end up “being” for the rest of your life, and if you just so happen to be a complete and total shmuck at the big 2-5? Odds are, you’re going to be a complete and total shmuck forever.

Conclusion (or something kinda like that, I guess): 

For as long as human beings are going to be on this planet, the debate about biological determinism and social influences - and particularly, which of the two has a “greater pull” on one’s behavior, attitude and mentalities - is going to be, at best, circular, and at worst, completely fragmented.

The proof, as they say, is in the pudding: we’re seeing some pretty substantial evidence that both neurology and interpersonal experiences play mighty important roles in how people develop. In fact, it’s quite clear that the two are so closely related - with social experiences shaping physiology, and vice versa - that it seems a little stupid to treat them as anything other than co-influencers on one’s mental health.

The problem is, the fields of psychology and biology are so opposed to intermingling that it seems as if the two are intentionally trying to downplay the push-pull relationship between external factors (social prompts and personal experiences) and one’s physiological development. The reality is, BOTH influence how an individual develops, and until modern science - all branches, mind you - comes to the consensus that “nature” and “nurture” are completely inseparable as psychological concepts, we’re just going to find ourselves chasing the invisible dragon until our shoes fall apart.

Now, as far as how this information should influence how one lives his or her life, well…I think it’s something we all ought to take into consideration as individuals. An understanding of the complexity of neurology would go a long way in truly explaining why people do what they end up doing, and, in some capacity, believe what they believe and hold true what they hold true. The problem there, of course, is that such a perspective inevitably becomes a slippery slope into total determinism - meaning, if you don’t clear all four of these major neurological development checkpoints, it’s supposedly impossible to ever become a “completely” functional adult. Which, in turn, brings me to my ultimate thesis.

While statistics give us a pretty good idea about general populations, it all goes out the window when analyzing people as individual human beings. Yeah, if you check off all four of these things on your path to adulthood, you will most likely be a pretty decent human being, but that doesn’t mean that you can’t skip a point or two and still turn out to be a pretty good person - if not one better than most of the individuals that can say they snuck past all four neurological checkpoints without incident. Similarly, I am almost certain that there are people out there that failed at all four neurological points and ended up becoming decent folks, the same way there are PLENTY of people that nailed all four and have to be among the worst human beings that have ever lived.

To sum it all up: if you want a good life, these four suggestions are the ones most likely to put you in a position to succeed, based upon what we now know about neurological development and its impact on the psyche. But to guarantee that you will have a good life, there’s only one criterion that, at the end of the day, matters:

Just how willing are you, as an individual, to strive for one?

Wednesday, December 12, 2012

Why Being a Vegetarian Sucks

Four reasons to avoid being a vegetarian…from a vegetarian himself


About seven years ago, I made the decision to become a vegetarian. There were a lot of things that provoked me into giving up meat, ranging from the very commendable (as a preventative health measure - heart disease is more common in my family tree than high school degrees), to the less commendable (I thought it would be easier to keep weight off if I didn’t eat my regular six cheeseburgers a day), to the not at all commendable, in any way (I thought it would impress that one bisexual Portuguese chick I was working with at the time.) No matter the long-irrelevant rationale for my dietary choice, I’ve managed to stick to the whole “not-eating-things-that-used-to-have-faces” thing pretty well over the better part of the last decade. Sure, I’ve slipped up here and there and snuck a slice of pepperoni, and I think that sometime in 2009, I may have accidentally eaten a chicken Parmesan sub, but for the most part? I’ve been pretty darn dedicated to this whole “vegetarianism” deal.

Over the last couple of months, however, I’ve formally renounced my prior “vegetarian” status and converted to the much, much more manageable religion of “pescatarianism” (which my spell checker automatically transforms into “sectarianism,” which is really all kinds of awesome.)

A lot of people have asked me why I made the sudden change in what I find philosophically agreeable to eat. There really wasn’t a detectable tipping point that made me throw up my hands and yell “that’s it, NO MORE SOY DOGS!”, as much as it was a cumulative decision over time. What sort of factors influenced my decision to abandon “vegetarianism,” you might ask? Well, here are four reasons why being a vegetarian, unquestionably, sucks.

Reason Number One:
Being a Vegetarian is Expensive

The next time you’re walking down the aisles of your favorite big-box store, try taking a look at the “meat alternative” section. Take a REAL good gander at the prices. Now, waddle over to the frozen food section, and look at how much REAL sausage, chicken nuggets and fish tacos cost. I guarantee you that, no matter where you live, the “synthetic” meat will a.) cost nearly TWICE as much as the “authentic” meat and b.) come in packages with AT LEAST half the quantity of the real stuff.

In an economic crisis, trust me, you take note of this shit. For the cost of a pack of tofu sausage links, I could pick up FOUR PACKS of hot dogs. That means, for the exact same price, I could get 40 wieners as opposed to just six “protein links.” And if you REALLY want to make steam come out of your ears, take a look at how much a microwave tofu casserole will cost you, compared to a Michelina’s Alfredo noodles box. Wolfgang Puck’s frozen veggie pizza is damn near eight dollars, while Tombstone’s pepperoni pie is less than half the amount. Heck, you can even pick up a box with both hot wings AND pizza for less than the veggie pie. If you wonder why so many vegetarians are stick thin, it’s not because of what they eat…it’s because their food costs so damn much, they can only eat it once a week.

Reason Number Two: 
Being a Vegetarian is Inconvenient 

Let’s say you and your buds are hanging out on a Friday night. It’s getting late, and all of you are hungry. They say, “hey, let’s all go to McDonald’s!” and you’re all like, “yeah, awesome, too bad I can’t eat anything on the menu!” At that point, someone will almost assuredly make a joke about ordering a salad, and while all of your pals are bonding over special-sauce-soaked Angus beef, you’re stuck nibbling on a packet of stale fries and sucking on a milkshake that, ironically, probably has a horse in it somewhere.

It gets worse, folks. Imagine being a vegetarian, forced to find food at a theme park, or a movie theater, or a convenience store. Hardly any fast food restaurants offer tofu alternatives, and if they do? The clerks always scowl at you for making them walk over and warm up the frozen “pseudo-burger” that nobody has even thought about ordering the entire day. If you want to become a social outcast over the course of an afternoon, try being a vegetarian standing in line at Wendy’s sometime.

Reason Number Three:
Being a Vegetarian is, Ironically, Detrimental to Your Well-Being 

OK, let me preface this one by saying, yes, there are plenty of health benefits to being a vegetarian. We all know that vegetarians tend to live longer than omnivores, and for the most part, they are in way, way better shape than the rest of society. Even I’m willing to admit it from experience: after five years of eating tofu tacos and black bean enchiladas, my cardiovascular health is better than it’s ever been in my life.

Now, as for the negatives: being a vegetarian means you can FORGET about ever having muscle mass. Trust me, I have tried, and I’ll be several shades of damned if I’ve been able to gain an ounce of sinewy tissue since going strictly veggie. At one point, I even bought one of those $15 bags of whey protein powder to bulk up, and all it did was make my BM smell more flowery than it used to. I guess it’s not 100 percent impossible for someone to go vegetarian and gain muscle, but it’s totally unfeasible if you plan on having a 40-hour-a-week job and NOT living in the gym on the weekends. And let’s not even talk about the mental health implications of vegetarianism, which has been linked to elevated rates of depression, anxiety and somatoform disorders. And you thought developing “soy boobies” was the biggest risk associated with going vegetarian!

Reason Number Four:
Being a Vegetarian ALWAYS Puts You in Awkward Social Situations

Nobody likes vegetarians. Omnivores make fun of us for having too limited a diet, while vegans berate us for having too vast a diet. People automatically assume that you’re a vegetarian for ethical reasons - you are all for animal rights, you have some political grudge against the fast food industry, etc. - so when the hardliner vegetarians find out you’re an apolitical vegetarian, you end up being ostracized from your own people.

Imagine this scenario. You’re showing up at your girlfriend’s place for the first time, and her mom walks in with one of those plastic alligator smiles and drops a big, fat plate of meatloaf in front of you. It’s the family specialty, she says, something that’s been passed down through the family since the Middle Ages or some shit. You poke it with your fork, and you gently meep, “gee golly, this dish sure is swell and all, but since I’m a vegetarian, I can’t eat this thing that has such an emotional tie to who you are as an individual.” Your girlfriend is embarrassed, the mom is visibly disappointed, and the patriarch is ready to stab you in the face with a butter knife. Now, repeat seven hundred times, and that’s pretty much par for the course for college-aged me. Even if you somehow manage to obtain a philosophically edible meal, things get way too political as soon as someone notices you are a vegetarian. All you want to do is jam some beans down your throat, and all your table mates want to do is start World War 3 over the pros and cons of pork. It’s physically impossible to have a normal social experience when the specter of your vegetarianism floats over the table like Jacob Marley’s ghost every time you pick up a spoon and a wad of napkins. Trust me…I know.

So, with all of this information taken into consideration, is vegetarianism still a wise call for most modern-day Americans? Well, as long as you have plenty of money, don’t mind being unable to eat at 90 percent of restaurants or social events, don’t care about having muscle mass and the ability to lift anything that weighs over 20 pounds without passing out, and don’t mind having people launch projectiles at you during heated get-togethers, then, yeah, I’d say that going vegetarian isn’t that bad of a deal. But for everybody else? Whatever you do, don’t put down that corn dog.