We’re happy to announce today the launch of an Open Science group that will meet monthly in New York City to discuss Open Science, data-driven science, scientific transparency and reproducibility, and the future of scholarly writing and publishing. NEW YORK OPEN SCIENCE MEETUP Please join us to stay posted about future events. P.S. There will be PIZZA.
Still writing your own documents? That’s so 3200 BCE! At Authorea, we take digital publishing so seriously, we want to WRITE YOUR PAPER FOR YOU! By partnering with SCIGEN (which we briefly profiled last week), writing, submitting, and disseminating your research will never again require a keyboard. Scholarly research and writing have never been easier. By stitching together a home-brewed soup of technical lingo, a cursory glance at your paper will yield resounding “Oh, hum, yes?”s – we guarantee! Concerned your manuscript won’t get accepted for publication? SCIgen has a proven record: _many_ of its randomly generated manuscripts have made it into “peer reviewed” journals. And don’t worry, this is only horrifying if you aren’t in on it! Best of all, you can use a private article on Authorea to keep unwanted questions and comments out of the equation. Open and honest peer review is, after all, too dangerous an experiment when we have venues ready and capable of accepting your SCIgen-derived piece of utter genius. Happy Writing! ;)
On the left, you’ll see a little clock icon - this opens the article’s HISTORY: a Git-based log of the updates and edits authors have applied to the article. This post should hopefully only have one entry, as it’s short and typed in one sitting (_edit: this is never the case_), but we all make mistakes. Two interesting ideas to meditate on, with respect to science and scholarly communication:

1. What would a Git history look like for an entire piece of research, or even just the many iterations of a single experimental procedure? GitHub does this for software development, of course (we can integrate your articles with your GitHub repos, by the way), but there’s a whole untapped academic ecosystem - how do thoughts mature and develop in other fields?

2. If you had a _Git History of Science_, there would be so many re-additions and re-deletions, with entire huge sections removed (phlogiston, anyone?), that Compare views would be a wash of green and red. How many “mistakes” have been made and re-made over time? What could we learn from the trends and developments of knowledge?

SCIENCE IS REALLY A PROCESS AND A WAY OF THINKING. Why aren’t we keeping better track of the thought process and showing the errors made along the way? For one thing, it would help us build on or fork each other’s work - with fewer redundancies and unnecessary pitfalls as well. Plus, “mistakes” are a helpful, even formative, force in the scientific process itself. Think about any great thinker, writer, artist, or maker: I bet their rough drafts would seem pretty valuable now. In what other ways might we benefit from having detailed histories of inventive, creative, and thoughtful processes?
Perhaps you have heard of the peer review fraud scandals rocking several big journals. Rings of researchers exchanging quid pro quo favorable reviews; PIs reviewing their own work unbeknownst to editors; probably other bad things we haven’t found out about yet. Or perhaps you remember the prank paper generator SCIgen: it has produced many nonsensical manuscripts that were “peer reviewed”, accepted, and later, embarrassingly, retracted. To combat the systemic problem these jokes expose, Springer designed SciDetect to do the job a “peer” should be able to do in the first place – spot blatantly obvious bullshit. Maybe you even know of “soft fraud” – knowing that editors have sympathies or vested interests in a sub-discipline at _Journal X_; reaching out to an old colleague likely to review your manuscript; frequently collaborating with big-name PIs whose brand has more clout than carefully done and clearly communicated science likely ever could. WHAT CAN WE DO?! That is the question. Certainly _Nature_ charging authors for faster peer review is not an intended answer. At Authorea, we think all levels of the scientific process would benefit from some openness and transparency. While different researchers might draw different lines, experimenting with open peer review seems like a good place to start (it’s kind of astounding that post-publication open review isn’t widely practiced yet). Open up your work to the light of day and get some honest, open feedback that makes it better – what if adding more eyes brought about changes that got your manuscript accepted to a higher-tier journal than you hoped? If that’s a solidly achievable best case, what’s the worst case? “BUT WHAT IF I GET SCOOPED?” This is always cited as the inevitable and terrible outcome of open access. To ensure speed, maybe you specify a time frame. To ensure security, maybe you disallow anonymous viewing or commenting. But really, that won’t change much.
Without any data (open or paywalled), I’m pretty confident the majority of “scooping” incidents are the result of many players shooting for the same goals, smart people working hard, and good old-fashioned word of mouth. Maybe if we shared more we’d all get so much further! That’s the thing: as scientists we are proud of our work. We publish to show the world, so why not show it off sooner? Get credit faster? Get more feedback and make more useful connections? These represent some major features of the Internet that researchers are still chronically under-utilizing, and it was invented for us! THIS IS THE 21ST CENTURY. WE SHOULD SCIENCE LIKE IT.
Philippus Aureolus Theophrastus Bombastus von Hohenheim, self-styled as PARACELSUS, was a Swiss-German polymath and occultist active in the early 1500s. Notable among his many contributions (including the designation “father of toxicology”) was his emphasis on observation in an era when knowledge from the past was held in the highest regard. This belief, admittedly revolutionary at the time, was further reflected in his personal motto: _alterius non sit qui suus esse potest_ (“let no man belong to another who can belong to himself”). He refused to follow centuries-old schools of thought, relying on his own wits to understand the world around him. Paracelsus’s defiant independence naturally clashed with authorities, which only served to stoke his ego (see quote below). His challenges to traditional medicine, advocacy for observation as the path to knowledge, and use of common language for scholarly communication (learned individuals of the day lectured only in Latin) all reflect changes society still struggles with today. WHAT CAN WE LEARN ABOUT SCIENCE FROM A 16TH CENTURY MYSTIC? Science, compared to other fields like math, art, or finance, is formally a recent development. The first text to resemble a modern journal article - Galileo’s _Starry Messenger_ - is, like Paracelsus and his philosophy, prophetic of open science and open data. Paracelsus believed knowledge and the information behind it should be widespread (even the physicians of his time were about as educated as barbers and butchers) as well as rigorously examined and questioned. He also thought he was incredibly smart:
FABIO, WHEN DID YOU DECIDE TO GO WATCH AN ECLIPSE IN THE ARCTIC? I’ve been feeling this urge to visit the northernmost parts of Earth for a while now. My PhD in Stockholm gave me the opportunity to explore the Norwegian coastline and Lapland, but the Arctic was a different story. A sort of forbidden dream. Then last year I started a postdoc at Yale, in the research group led by John Wettlaufer, who’s an expert on sea ice and the Arctic. When I heard there was gonna be a total solar eclipse at Svalbard I knew I had to go. WHERE IS SVALBARD, EXACTLY? Svalbard is an archipelago situated about halfway between continental Norway and the North Pole, and it is an outpost for research and Arctic exploration. Longyearbyen, Svalbard’s capital and a little town of about 2,000 people, is home to the world’s northernmost institution for higher education and research: the University Center in Svalbard.
A recent article titled The spin rate of pre-collapse stellar cores: wave driven angular momentum transport in massive stars was written on Authorea and submitted to the Astrophysical Journal (ApJ) and to the arXiv as a pre-print. While waiting on peer review from the ApJ, the authors want to test Authorea as a platform for OPEN PEER-REVIEW. By going to the document’s page, you can comment on a section, figure, observation, sentence, or the whole piece. The authors and other commenters can respond and further the discussion. And it’s all out in the open, just how science was meant to be. But it doesn’t stop there. You can also view full-size, high-resolution versions of the paper’s figures, as well as easily follow links in the References at the bottom of the page. In the paper, the authors show for the first time how internal gravity waves, excited in the turbulent layers of stars at least ten times more massive than the Sun, can radically change their internal rotation rate. In particular, these waves – somewhat analogous to ocean waves – can determine how rapidly the stellar core spins around its axis when the star is about to die and become a supernova. The spin of a pre-supernova core is important because it deeply affects the stellar explosion and determines the rotation rate of the stellar remnant (neutron star or black hole).
WELCOME TO THE PITCHFORK PARTY This post comes in the context of a series of healthy discussion pieces on authoring scientific content for the web:

- LaTeX was not built for the Web by Alberto and Nate from Authorea.
- LaTeX Something Something Darkside by Peter Krautzberger from MathJax.
- A Scholarly Markdown discussion on Hacker News (see the comments)

In this text, I will try to elaborate on the merits and deficiencies of using a pre-web authoring syntax, LaTeX, for writing modern publications in 2015 as active web documents. My stance is evolutionary – we should adapt our existing tools to the new environment and, in the process, gain insights into what the next generation of tools ought to be. If you are a working scientist who authors in LaTeX, I will suggest how to gradually adapt your existing toolchain while making your first steps towards the future of publishing. If you don’t find the technical details interesting, you can skip to my suggestion in Section [sec:conclusion]. If you are a developer, I will argue that the next generation has not fully arrived yet.

WE’RE NOT GOING TO START A FIRE Feelings can burn strong when the words “LaTeX” and “Web” appear together. Debates over tool superiority, especially online, tend to become heated and destructive quickly. My best guess is that our personal experiences with our tools evolve over time into full-blown relationships, with all the associated pros and cons of that status. Maybe you truly love your tool, and that is great – please go ahead and nourish that feeling. Meanwhile, I will step back into more abstract territory, poke some applications with a stick, and see when they bite. You’re welcome to tag along, but there’s no need for extra venom. The author has his own torch in hand: I am a core contributor to LaTeXML and an enthusiastic developer at Authorea. So keep that bias in mind while reading on.
THE SYSTEM AS IT STANDS A study published in July 2014 used the Freedom of Information Act to request access to contracts between academic publishers and 55 universities and 12 library consortia. 360 contracts were received, documenting the prices and bundling of deals from 9 major publishers (including Elsevier, Springer, Wiley, ACS, and Oxford University Press). The contracts show the result of opaque sales practices, manipulation, and varying degrees of negotiation skill: publishers can charge vastly different prices for the same products and services. Keep in mind they are selling to nonprofit institutions whose members

- conduct groundbreaking and lifesaving research (often taxpayer-funded),
- volunteer their time and talent to the publishers’ peer review process,
- pay for the submission of articles published in journals,
- and are now buying it all back.

Also keep in mind that top publishers have profit margins on the order of 30% or more. In the mid-1990s, with the shift from print-only to digital distribution, economic formulations changed. No longer would a research university _need_ to subscribe to multiple copies of in-demand journals. No longer would storage space play a significant role in decisions (e.g. storage and maintenance costs for a 2500-page journal volume range from $300-1000). No longer would impact be a limiting factor for purchased titles - or, as is now emerging, should it even be one. And publishers could now offer their whole catalog of journals at one discounted “Big Deal” price. In the words of Derk Haank, then CEO of Elsevier and now of Springer: But what it [electronic publishing] does do is to _DRAMATICALLY LOWER THE MARGINAL COSTS OF ALLOWING ACCESS_.... [The cost for each new user] is virtually nil and that means that we should be more creative in the business model....
where we make a deal with the university, the consortia or the whole country, where we say for this amount we will allow all your people to use our material, unlimited, 24 hours per day. And, basically the price then depends on a rough estimate of how useful is that product for you; and we can adjust it over time. [emphasis added] Here, “adjust it over time” means mandating an average 5-6% price increase annually. Bergstrom et al. calculate: “A bundle whose price increased by 5.5% per year would DOUBLE ITS PRICE BETWEEN 1999 AND 2012, whereas over the same period the US consumer price index rose by 38%.” [emphasis added] What’s more, such “creative” business models force library administrators to try to quantify abstractions like the value of information. Information, however, is context-dependent: opinions on a paper’s importance could range from “meaningless” to a critical insight for unraveling a disease pathway. At the end of the day, an all-inclusive “Big Deal” bundle may be easiest – if funds are available. When cost limits access, however, researchers may rely on e-mailed PDFs from helpful colleagues at better-equipped campuses. Another solution, when access is out of reach or publication slow (e.g. a year from initial acceptance to publication is common for some Statistics journals), is pre-print repositories like arXiv. Unfortunately, the articles aren’t peer-reviewed – one reason big publishers can charge so much. This is also a reason we think researchers (and journals!) might want to try their own pilot study of Authorea as an interactive repository or submission platform. This is the 21st Century, scientists should be writing and disseminating like it! Have thoughts about this? Let us know in the Comments or follow us to get updates!
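That doubling claim is easy to sanity-check with a compound-growth calculation (a quick sketch of the arithmetic, not taken from the study itself):

```python
# A 5.5% annual price increase, compounded from 1999 to 2012.
rate = 0.055
years = 2012 - 1999            # 13 annual increases
growth = (1 + rate) ** years
print(f"{growth:.2f}x")        # roughly 2x - the bundle price doubles
# The US consumer price index rose ~38% over the same span,
# i.e. a factor of only about 1.38.
```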
Tens of thousands of innovators met in Austin, Texas last week to discuss emerging tech, science, and innovation. It was the Interactive portion of South by Southwest (SxSW). Authorea was there. Among many great events, the MIT Media Lab presented “SOLVE”, an initiative set to bring together the most gifted researchers and innovators to identify and tackle challenges where new thinking and emerging technologies have the potential to make the world a better place. SOLVE identified four main themes: Learn, Cure, Fuel, Make.
Leave the Abstract empty if your article falls under any of the following categories: Editorial, Book Review, Commentary, Field Grand Challenge, Opinion, or specialty Grand Challenge. As a primary goal, the abstract should render the general significance and conceptual advance of the work clearly accessible to a broad readership. References should not be cited in the abstract. Refer to http://www.frontiersin.org/ or TABLE [TAB:01] for abstract requirements and length according to article type. KEYWORDS: Text Text Text Text Text Text Text Text. All article types: you may provide up to 8 keywords; at least 5 are mandatory.
Today we are proud to announce the winners of our travel grant for European student attendees of the APS March Meeting in San Antonio, TX. WHY DID AUTHOREA SPONSOR THESE TRAVEL GRANTS? At Authorea, we want to build bridges between scholars, disciplines, and cultures in order to form a collaborative scholarly community at a global scale. Sometimes, _face-to-face meetings are the best catalysts for sharing and creating new connections_. Two of us at Authorea - Alberto and Matteo - are from Italy. They have benefited from academic careers abroad (postdocs at Harvard and the University of California, Santa Barbara, respectively) thanks in part to important connections they made at international conferences in the early stages of their academic careers. Our winners for the March Meeting are ALBERTO DE LA TORRE and JUAN TRASTOY QUINTELA, both from Spain. We hope that the connections they made at the March Meeting will lead to fruitful collaborations.
Alberto Pepe is the co-founder of Authorea. He is also an Associate Research Scientist at HARVARD UNIVERSITY, where he recently completed a postdoctorate in Astrophysics. During his postdoctorate, Alberto was also a fellow of the BERKMAN CENTER FOR INTERNET AND SOCIETY and the INSTITUTE FOR QUANTITATIVE SOCIAL SCIENCE. Alberto is the author of 30 publications in the fields of Information Science, Data Science, Computational Social Science, and Astrophysics. He obtained his Ph.D. in Information Science from the UNIVERSITY OF CALIFORNIA, LOS ANGELES with a dissertation on scientific collaboration networks, which received the Best Dissertation Award from the American Society for Information Science and Technology (ASIS&T). Prior to starting his Ph.D., Alberto worked in the Information Technology Department of CERN in Geneva, Switzerland, where he developed data repository software and promoted Open Access among particle physicists. Alberto holds an M.Sc. in Computer Science and a B.Sc. in Astrophysics, both from UNIVERSITY COLLEGE LONDON, U.K. Alberto was born and raised in the wine-making town of Manduria, in Puglia, Southern Italy. Email: email@example.com Twitter: @albertopepe
Lab 10 introduced the use of a microprocessor in the FPGA. I was able to compile the provided circuit in Quartus and execute a program that used the DIP switches to save a value in memory, which an incrementer then used to count up. This lab was extremely useful in showing the infrastructure of a microprocessor.
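As a software analogue of that datapath, here is a hypothetical Python sketch (the actual lab ran as a compiled Quartus circuit on the FPGA; the function, names, and 8-bit register width are illustrative assumptions):

```python
def run_counter(dip_switches: int, ticks: int, width: int = 8) -> list[int]:
    """Mimic the lab's incrementer: the DIP-switch value is saved
    to memory once, then added to the counter on every clock tick,
    wrapping at the register width like real hardware."""
    mask = (1 << width) - 1
    step = dip_switches & mask   # value latched into memory
    counter, trace = 0, []
    for _ in range(ticks):
        counter = (counter + step) & mask  # incrementer with wrap-around
        trace.append(counter)
    return trace

# Counting by 3 for five clock ticks:
print(run_counter(0b00000011, 5))  # [3, 6, 9, 12, 15]
```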
WHO’S GOING TO BE AT SXSW THIS WEEKEND? Matteo and Alberto will be there as part of an event called ffMassive. If you are going to be in Austin, TX on Sunday March 15, stop by for drinks, life-size Jenga, and to learn more about music, tech, and science, of course. Here’s the RSVP link: https://ffmassive2015.eventbrite.com/ We have a couple of VIP tickets left for the after party (invite-only). Interested? Just let us know at firstname.lastname@example.org
Lab 9 introduced the use of memory to run an audio player from data stored in a text file. I was able to compile the provided circuit and send an audio file over the serial port to the FPGA, which played the song. This lab was very useful in demonstrating the use of on-board memory in the context of file I/O - an extremely useful capability for any digital circuit that goes beyond a single fixed task or needs to access data stored on an external device like a hard drive.
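To make that data flow concrete, here is a hypothetical software analogue in Python (the real lab used an FPGA circuit; the memory size, function names, and byte-sized samples are illustrative assumptions):

```python
import io

MEM_SIZE = 1024  # assumed on-board memory capacity, in bytes

def load_over_serial(port, memory: bytearray) -> int:
    """Analogue of the loader: stream bytes from the 'serial port'
    into on-board memory (until the stream ends or memory is full)
    and return the number of samples stored."""
    data = port.read(len(memory))
    memory[:len(data)] = data
    return len(data)

def play(memory: bytearray, count: int) -> list[int]:
    """Analogue of the audio player: read samples back out of
    memory in order, one per clock tick."""
    return [memory[i] for i in range(count)]

# A fake serial port carrying four audio samples:
port = io.BytesIO(bytes([10, 20, 30, 40]))
memory = bytearray(MEM_SIZE)
n = load_over_serial(port, memory)
print(play(memory, n))  # [10, 20, 30, 40]
```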