you have to evaluate my biases. •I’m a Ph.D-program dropout. (The polite term is “attriter,” but whatever. I burned out badly and left right before comps.) •Since then, and partly because of that experience, I’ve decided that the academic system of rewards and prestige is often misguided and unjustifiable, sometimes hilariously corrupt. •People do ask me “why don’t you pursue an LIS Ph.D?” Now you know. I want nothing to do with that system. •I’m okay with teaching in a professional program. I have no truck with doctoral education, and I prefer it that way. •But a fair few Ph.Ds have accused me of bitterness and bias. You decide.
scholarly reputation. •Mind you, even with that, a lot of things don’t make sense... •Trade publishing is a money economy. Scholarly publishing is a reputation/prestige economy stacked on top of an increasingly-precarious money economy.
in your field the moment you earn your Ph.D, you won’t get hired to be a professor to begin with. •Except perhaps as an “adjunct,” which basically means “heavily-abused freelancer.” •Difficulty extra-acute in the humanities; scientists tend to have more options. •If you get hired but don’t continue to build a reputation in your field, you lose your job. (Hold that thought.) •If your work is grant-funded, your reputation plays into whether you receive grants. •And if you’re expected to fund yourself mostly or solely on grants (hard sciences, often), you lose your job if you don’t. •Understand me well: academia is a HARSH ENVIRONMENT with ZERO tolerance for prestige-related failure.
record, and the ACADEMIC prestige of the venues where you published. •I will be adding nuance to this in the course of this lecture, but that’s the foundation. •In some fields, record of grant success. •Anything else is a remote consideration at best. •Does that mean teaching? Yep. Sure does. •(Caveat: “collegiality” is a thing, and a perceived lack of it can covertly torpedo an otherwise-acceptable academic. As you’d guess, this disproportionately harms women, people of color, queer people, and other already-marginalized populations.)
a lot don’t) •Job security for college professors •(flippancy aside, tenure is intended to protect academic freedom) •How it works, more or less: •Get hired into a “tenure-track” position. The tenure clock starts! •When the sands run out, present a research/teaching/service portfolio to your department and your university, along with letters of support. Research counts the most. •If they like your portfolio, you (theoretically, at least) have a job for life! If not, you’re fired... and while it’s not impossible to get another tenure-track job, it’s not exactly likely. •Future academic librarians: many academic libraries impose tenure requirements! •FIND OUT as you apply, definitely if you interview. If you’re hired, find out what tenure requirements are immediately!
some extent, university and department. •For the sciences: peer-reviewed journal articles •Data? Code? Not so much yet... but that is already changing. •For the humanities: one, perhaps two, sole-authored, peer-reviewed scholarly monographs. •Journal articles? Book chapters? Bleh. If you must. It’s déclassé. •Co-authorship? Nnnnnnnope. •Ebooks? UNCLEAN! Print or it doesn’t count. No POD, either. •Digital projects? Someone will get the vapors... •(if you sense I have very little respect for this, you’re 100% right)
you work. So THEY OWN YOUR SOUL. WHAT THEY SAY, GOES. •“Publish print books!” “How many?” •For this reason, you need to discount What’s Happening In The Larger World as a guide to how tenure-track (“junior”) faculty will behave. •Juniors, no matter how forward-thinking they are, have no choice but to do what seniors tell them to. Even if the seniors haven’t poked their noses out of their offices since the last millennium. •This is why scholarly communication changes s-l-o-w-l-y.
diploma. •MLS or Ph.D, take your pick. •You’re lucky enough to have landed a tenure-track job. You want tenure, of course. •You have limited hours in the day. What activities do you prioritize? What do you cut back on? What else can you do to raise your chances? •I want you to feel some empathy here. Scholarly publication is often wrongheaded, but that’s because the reputation/prestige economy is also wrongheaded... and CUTTHROAT.
•Information literacy and reference folks heading for academic libraries: tired of explaining it yet? You will be. Oh, you will be. •But we need to have a serious discussion about peer review. It ain’t all it’s cracked up to be, sometimes. •Part of this is that we expect vastly too much from it. •Part of it is that peer reviewers are, you know, human. And make mistakes. And grind axes. •So here’s a start toward that discussion. •More is absolutely welcome on the class forums!
for a couple of people in the same field to do peer review. •Problems already! What if the editor doesn’t know the right people? Or they refuse? Or they’re feuding with the author? •Peer reviewers asked to return go/no-go decision and comments. •The exact criteria for go/no-go vary. Classically, it’s a function of (perceived) correctness of method and results, importance to the field (squidgy!), writing quality, and currency. •Turnaround time: ASAP. Rarely more than a few weeks. •Is that enough time to be sure? Really? •Certainly not enough time to redo the research (“replication”). •So, you know, the best-intentioned and most expert peer reviewers are still basically going with their gut.
work from being published •Are you comfortable with two readers and an editor catching every single bad book or article? Without ever accidentally (or worse, intentionally) rejecting a good one? •Yeah, no, I’m not either. This is a flatly ridiculous expectation! But it’s incredibly widespread, and too-often unquestioned. •Leads to the erroneous conclusion “if it didn’t get published, it must be bad.” The money economy can play into this too, especially for monographs! More on this later. •Improving good work •Good peer reviewers are worth their weight in diamonds. •This function, in my experience, tends to work pretty well!
WORK? •Nope. WE KNOW THIS. •Basically, it’s not a question of whether something gets published, but where (and the associated prestige of the venue). •Repeated publication hoaxes! Including in high-prestige venues! •And the correlation between important work (in hindsight) and prestige of publication venue is dubious at best, and breaking down further (hold that thought). •A LOT of retractions lately based on data error, data fraud. Guess what? Peer reviewers typically don’t even LOOK at data! •Bias in journals toward flashy headline-grabbing results •Which... uncomfortably often turn out to be flat wrong. •The details here are a bit beyond the scope of this course, but check my Pinboard “peerreview” tag for examples and discussion.
biased. WE KNOW THIS. •How do we know? Easy to test, actually! Send the same paper around, altering one factor in the attribution, and see what happens. •(There’s plenty enough subject overlap among journals that “poor fit with the journal” can’t explain the results.) •Reflects societal biases •Feminine names get harsher reviews, more rejections than masculine names. •Names suggesting a scholar of color get harsher reviews, more rejections than typically-Caucasian names. •Papers from colleges/universities with lower perceived prestige get harsher reviews, more rejections than (e.g.) Ivy League institutions. •Reviewer bias toward conservatism •“Revolutionary” or “current consensus disproving” results can be disbelieved or even intentionally suppressed by peer reviewers.
a bad cue, leave the names off, right? •“Single-blind” review: reviewers know author’s name, author doesn’t know reviewers. •“Double-blind” review: neither reviewers nor author know who’s who. •Both aimed at guarding against retaliation as well as bias. •Sometimes helps... but not enough. •Some fields are pretty small. Everybody knows what everybody’s doing. (I once recognized an author by topic and writing style in an article I was asked to review.) •It’s frequently possible to recognize an author by looking in the bibliography or lit review for self-citation. •Doesn’t fix conservatism bias. Or reviewer ax-grinding.
game peer review. It’s ugly... but it sometimes works. •Jawdropping example chronicled at Retraction Watch of a scientist who suggested a couple of peer reviewers... who were actually him. That’s extreme, but review cronyism is a thing. •There’s a “retraction” system for dealing with this in journals... but frankly, there’s no way everything gets caught that should be. •As for books, they’re retracted so seldom that a retraction is Really Big News. •Reading the Retraction Watch weblog highly recommended!
them. •Royalties are paid, but they rarely amount to more than occasional lunch money. (Jackpot = Go Big Read selection.) •So, scholars, duh. But it’s not quite that simple. •Scholars in the humanities write many more monographs than scientists do. •For humanities scholars, writing monographs is an enforced activity; most can’t NOT do it if they want any kind of career. •Not just any old book, or any old publisher, will do! •Neither trade books nor textbooks “count.” •Most publishers are not considered sufficiently “scholarly.” •Why not? Tenure and promotion, again. When I said ACADEMIC prestige, I meant it! •On the author side, this is all about the prestige economy, not the money!
universities to disseminate the work of their own scholars. (Little-known fact! Impress friends at parties!) •This came to be seen as incestuous, so instead, presses tend to specialize in subject areas parallel to their host university’s top specialties. •Scholarly societies •Small independent presses •PUBLISHER REPUTATION MATTERS TO AUTHORS. More than anything. •Your tenure/promotion committee won’t actually read your book; they might skim a review if they’re especially conscientious (and if your book was even reviewed!). •They just care that you published it somewhere considered “quality.” (If you sense that I have very little respect for this, you are 100% right.) •Comparison with library collection-development heuristics is left as an exercise for the student!
for reasons we’ll discuss. For now, take my word for it. •Other scholars in the field, a few •Um... uh... the author’s family? •Yes, there are exceptions (Go Big Read!), but they’re just that. •Inevitable corollary: to break even on these books, because so few sell, prices have to go through the roof. Forget profit. •Most scholarly presses settle for not losing too much money... but for a lot of perfectly cromulent books, even this is out of reach. Those books don’t get published. This is where the money economy damages the prestige economy. •Print runs were in the low thousands. Now they’re in the low hundreds, max.
evidence nobody checks them out from academic libraries. (Roughly half of checkoutable books in academic libraries are never checked out. Ever. Not even once.) •They’re not even assigned in college classrooms. •Yeah, the research has been done on this one. •Part of the problem is high prices for monographs, as discussed. •Ian Bogost critique: “vampire publishing” •Language NSFW, but worth reading: http://www.bogost.com/blog/writing_books_people_want_to_r.shtml
subsidies, for university presses •These are drying up! University presses have been easy targets in bad-budget years. Can universities afford “prestige”? •Again, this means good books without much of a market don’t get published. •Answer 2: via expanding into local and/or trade publishing •... and using that to subsidize vampire monograph publishing •Answer 3: via subscription journals •We’ll talk about this next week. •Answer 4: via subsidies from conferences, for scholarly societies •Answer 5: they don’t.
though not always open-access, immediately or after an embargo. •A humanist’s first monograph is typically a revision of the dissertation. •But if the actual dissertation is OA, will a publisher publish that first monograph based on it? •The answer is “reply hazy.” Many will. Some won’t. Some claim they will in public, but say something different in backchannels. •And will a library buy it? •The answer used to be an unquestioning “yes.” I’m hearing that’s not quite as true any more.
•ProQuest is a LoC copyright depository, and also has significant analog preservation capacity. •PQ then turns around and sells them back to libraries, of course… •ProQuest gets a copyright license from the author. •Which the author predictably doesn’t read, and which entitles ProQuest to do things that some authors have found out about and don’t like. •E.g. selling copies on Amazon and Scribd. •My sympathy is limited. READ YOUR CONTRACTS, authors. •FYI, “ETD Librarian” is a real-world job title, and ETDs also form part of many academic-library jobs in scholarly communication.
for open access, ETDs, and/or other change in scholarly communication, the local university-press director is NOT your friend. •I hope I’ve given you some context for that. Feel free to ask more questions on the discussion forum. •Plenty of humanities scholars don’t like us either. •There is some seriously wacky conspiracy-theory stuff out there from humanists about libraries and ETDs. •Library/press mergers... are somewhat fraught. •My grapevine says that neither likes nor understands the other, but the press tends to be nastier about it.
forms of credit: •Authorship (the big prize for scholars) •Publisher/publication name, correlates (somehow) with prestige •Acknowledgments (basically meaningless) •So write your own punchline: A tenured faculty member, a postdoc, three grad students, two librarians, and an IT professional work on a research project. •Who gets “author” credit on the published book/article? •Who gets any other career-helping credit whatever? •Greg Downey would wax voluble here about the erasure of labor. I’m not him, but this bugs me too. It’s deeply unfair.
byzantine “who can be an author?” rules. •Author ordering is significant for some science disciplines/journals, but not others. •You’ll also see wildly long author lists, in the hundreds! •An article with 1000+ authors made the rounds on social media. Physics is really, really weird about this, apparently. •Others erase all labor not the faculty member’s, and get upset when called on it. (HUMANITIES)
journals. LOTS AND LOTS AND LOTS. So many journals! •And as with trade publishing, there are scammers and bottom-feeders. •There’s a prestige hierarchy... but it’s a lot clearer toward the “top” than anywhere else. •“Glamour mags”: Science, Nature, Cell. •One glamour-mag publication can make a career. Seriously. •Some disciplines have their own discipline-specific glamour mags. •(If you sense I don’t respect the glamour-mag system, you are 100% right.) •So how do you pick where to publish? Glamour mags aside, how do you know which journal will help your career most? •Yeah, yeah. Supposedly you use all the quality-assurance criteria your librarian taught you as an undergrad. Supposedly. •Nobody actually does this. Everybody relies on prestige proxies.
the tenure track, don’t publish anywhere else. •Be careful, though. Some scam/bottom-feeder journals claim peer review they don’t do. •Acceptance/rejection rate •Theoretically, a high rejection rate signals a desirable journal that reviews everything very carefully and rejects all the dross. Pragmatically... •Also, journals... how can I put this? Lie. Rates aren’t audited, and there isn’t a standard way to calculate them anyway. •And what is the point of rejecting good work to game a number, for pity’s sake? •Printedness (or PDFness-not-HTMLness), mostly in the humanities •I KNOW, I KNOW. But too many scholars still think electronic automatically lacks prestige. This makes it hard for e-publishing to shift the system. •For e-journals, mostly in the sciences, DOIs •There’s nothing magic about a DOI. But scientists (!) think there is. •Journal Impact Factor, and other bibliometric measures.
knowledge and changing his/her discipline? •Humanities: Because s/he published in the right presses! •(This is a brainless answer. But letters of support aside, it’s the only answer the humanities unquestioningly accept for tenure.) •Sciences: bibliometrics! •Quantitative analysis of the STEM literature. Often synonymous with “citation analysis.” •Based on the idea that influence/impact can be measured by measuring which articles/journals cite which previous articles/journals. •(Ignores the many reasons for citations, e.g. “this prior work is a load of hooey!”)
developers, to help them decide whether to subscribe to or drop a journal •Impressionistically: “How often was this journal cited in other journals within the last X years?” •This is a meaningless number by itself; it must be compared with other journals. •Patterns/frequency of citation vary considerably by discipline! This means you CANNOT legitimately compare a biology journal’s JIF with a sociology journal’s! (This may seem obvious, but it’s not; Australia brainlessly tried to allocate federal research funding across all disciplines via JIF.) •Calculation/publication of JIFs controlled by Thomson Reuters.
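For reference, the standard two-year JIF works out to the ratio below. This is a sketch of the published definition only; Thomson’s exact counting rules are (as the next slide notes) not reproducible.

```latex
% Two-year Journal Impact Factor for journal J in year Y (sketch of the standard definition)
\mathrm{JIF}_J(Y) =
  \frac{\text{citations received in year } Y \text{ to items } J \text{ published in } Y{-}1 \text{ and } Y{-}2}
       {\text{number of citable items } J \text{ published in } Y{-}1 \text{ and } Y{-}2}
```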
skew the numbers. •Ergo, a journal’s JIF says absolutely NOTHING about the impact or worth of any given article in the journal. •This one is crucial! Ask questions until you understand it! •Winner-take-all; high-JIF journals can coast, while low-JIF journals find it very hard to move up. •Not transparent •People have tried to reproduce Thomson’s numbers. They failed. Thomson juices the numbers (against fraud?), but won’t say how. •Gameable and gamed •Need a highly-cited article to artificially raise your JIF? Commission a review article! World plus dog cites review articles! •“Coercive citation” (cite our journals or we won’t publish you!) a known, sleazy phenomenon. Thomson penalizes journals caught at this, but how many does it actually catch?
legion, by the way. It’s EVERYWHERE.) •If JIF can’t measure articles, how can it measure people? •We’re back to prestige-by-proxy again: how good you are is judged by where you’ve published. •This is EXACTLY the brainlessness I called out about monographs. •Lately: correlation between JIF and fraud increasing, correlation between JIF and individual-article citations decreasing. •“Importance” chase partly at fault, as best we can tell. •But the Internet may also be a root cause: the journal and its brand are waning as the main way readers find articles to read.
journal articles? Do we actually want it to be? •Digital humanities •Datasets •Code •Outputs of applied research (inventions, process innovations) •Is it really okay to consider only academic/scholarly impact? Couldn’t that mean a hermetically-sealed, narrow-minded academy?
•As a possible model, at least. •Open science: credit everybody! •There is a “role/credit taxonomy” from CASRAI out there. We’ll see what happens with it. •It leaves off literature-searchers (often librarians!) altogether. Love you too, CASRAI! •The humanities will kick. They always do, at anything that challenges the “lone Byronic genius” stereotype. •I am overgeneralizing, and to be fair, the Modern Language Association is working hard to make this better. But on the ground? In tenure and promotion hearings? Old-school still reigns.
mostly. •But this can raise retaliation risk, especially for junior scholars. •Post-publication review •... which, who is going to do this exactly? It’s hard enough to find people to review pre-publication! •Happening organically via social media, to some extent. •Increased replication •Social psychology has been in the news for fraud of late; this movement started there, but is slowly expanding in scope. •Pharmacology is another major target; huge history of fraud. •In my head, the real answer is not expecting too much. •But catch anybody actually agreeing with me.
measure, but adjusts for disciplinary differences and weights for prestige of citing journal •... but you STILL shouldn’t be using a journal measure to judge people! •H-index (and various similar indices) •Article measure, combinable into a person measure •The largest x such that you have x articles with at least x citations each (see the sketch below) •Google Scholar says mine is 7 (5 since 2009), so I have written seven things with seven or more citations each. (Yes, you can look this up!) •Breaks the tyranny of the “glamour mag/good journal” to some extent; if you publish highly-cited stuff in non-glamour-mags, you still win.
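If the “largest x” phrasing feels slippery, here is a minimal Python sketch of the same rule. The citation counts are invented for illustration (seven articles with at least seven citations each, matching the Google Scholar example above).

```python
# Minimal sketch of the h-index rule: the largest x such that
# x of your articles have at least x citations each.
def h_index(citation_counts):
    """Return the h-index for a list of per-article citation counts."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank          # still have at least `rank` articles with >= `rank` citations
        else:
            break
    return h

# Invented example: exactly seven articles with seven or more citations -> h-index of 7.
print(h_index([52, 30, 19, 12, 10, 9, 7, 5, 3, 1]))  # -> 7
```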
on-the-ground, one department at a time thing. It’s just too ingrained. •Only other thing I can imagine shifting this is a declaration from something like the AAUP that JIF use in tenure and promotion is flatly unethical. •I mean, it is! But, you know, try to get people to say that!
out a preprint of a to-be-published article. •It went GONZO on Twitter, then got picked up by a couple of major newspapers and several well-regarded blogs. •It’s gotten tens of thousands of downloads! •How do you explain this to your tenure committee? Can you get them to think this is cool? •Remember, tenure is classically all about SCHOLARLY impact. •And none of this shows up (except indirectly) in a JIF! Even if it did, given the pace of publishing, it’d usually take years for the citations to pile up.
know, that newfangled thing called “the Internet” all the kids these days are using. •We can measure some of that! Imagine. •Downloads from a journal, repository, or other website, for starters •Reactions on social media, blogs •Bookmarks, saves to citation managers •Research ongoing about how well these measures correlate with eventual citations •There’s also a question outstanding about how much the academy should value impact/notice outside the academy! Especially important for fields with policy or practice implications.
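Purely as an illustration of the kind of question that ongoing research asks (does an altmetric such as downloads track later citations?), here is a toy Python sketch. All numbers are invented; real studies use large datasets and far more careful statistics.

```python
# Toy check of how well a hypothetical altmetric (downloads) tracks later citations,
# using a rank correlation. Data are invented for illustration only.
from scipy.stats import spearmanr

downloads = [1200, 850, 400, 90, 2300, 60, 710]   # hypothetical per-article downloads
citations = [  15,   4,   9,  1,   22,  0,   6]   # hypothetical citations a few years later

rho, p_value = spearmanr(downloads, citations)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```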
in tenure and promotion packages, kind of unofficially, and grapevine says they’re heeded. •The T&P process is so fraught that committees grasp at anything that looks “objective,” which in practice means anything that’s a number. •Even if they don’t know what the number means! Even if (like JIF) the number is a load of horse manure! •But official-ness for altmetrics will take time yet. •Funders will probably use them in grant reviews sooner than T&P committees will. •Disciplines with policy or practice implications will be faster too. (E.g. “translational medicine.”) •One funeral at a time, as they say...