Monthly Archives: September 2016

Humans of Data 4

“So when I was a kid, obviously Star Trek was the thing, because it was our better selves in the 23rd century. Civil rights, women’s rights, all those issues that were happening at that time in the 1960s were simplified in that show. But the thing that got me was the computer. Spock would have this conversation: ‘Computer, what is this thing? What was the global temperature in 1934?’ And there was always an answer. My start with data was looking at how instruments recorded it. As I’ve started to get into managing people, writing code, I’ve realised that we’re the people in someone else’s past. If we don’t get it right, they will suffer. They’ll ask the question, and the computer won’t have an answer. These people are all trying to get to that better 23rd century. It’s slow progress, baby steps. But being able to make sense of the research results that we take now, consolidating that, is really important to me.”

Open Data as a Moving Target: What Does it Take to Allow Reuse?

By Irene Pasquetto

As we all know too well, making all scientific data technically and legally accessible to all researchers is an ambitious task, complicated by constantly evolving social and technical barriers. It is fair to say that we are making progress in this direction. At SciDataCon 2016, we examined several concrete solutions that can facilitate openness of scientific data or, if you prefer, make sure data are FAIR (findable, accessible, interoperable and reusable).

However, it seems that the more we learn about how to make data open, the less we know about how exactly data will be reused by the scientific community, that is, by the researchers who generated the data and should have a primary interest in accessing it. Very few empirical studies exist on the extent to which open data are used and reused once deposited in open repositories.

The fulcrum of the problem is that data take many forms, and are produced, managed and used by diverse communities for different purposes. Nevertheless, different stakeholders (publishers, data curators, digital librarians, funders, scientists, etc.) hold competing points of view on the kinds of policies, values, and infrastructural solutions necessary to make data open. During a session moderated by Christine Borgman (UCLA) and titled “How, When, and Why are Data Open? Competing Perspectives on Open Data in Science”, Matthew Mayernik (National Center for Atmospheric Research), Mark Parsons (National Snow and Ice Data Center) and Irene Pasquetto (UCLA – Center for Knowledge Infrastructures) presented some of the challenges that make the use and reuse of “open data” such a complicated and heterogeneous process.

Mayernik argued that the integration of the Internet into research institutions has changed the kinds of accountabilities that apply to research data. On one hand, open data policies expect researchers to be accountable for creating data and metadata that support data sharing and reuse in a broad sense, in many cases to any possible digital user in the world. On the other hand, providing accounts of data practices that satisfy every possible user is in most cases impossible.

In his talk, Parsons effectively showed that data access is an ongoing process, not a one-time event. Parsons and his team examined how data repository products and their curation have evolved over several decades in response to environmental events and increasing scientific and public demand. The products have evolved in conjunction with the needs of a changing and expanding designated user community. In other words, Parsons’ case study shows that it is difficult to predict the users of a data service, because new and unexpected audiences (with specific needs) can emerge at any time. Parsons also argued that, for this reason, “data generators” may not be the best individuals to predict future uses of their own data.

Because open data users change over time, it is also necessary to build open repositories that provide data in formats flexible enough to allow different approaches to data analysis and integration for different audiences. This was the point made by Pasquetto, whose case study is a consortium for data sharing in craniofacial research, with a focus on the subfield of developmental/evolutionary biology, which recently adopted genomics approaches to knowledge discovery. Pasquetto found flexible data integration to be a necessary precursor to using and reusing data. “Data integration work” is the most contested and problematic task faced by the community: data need to be integrated at two or more levels, and these levels require extensive collaboration between engineers, biologists, and bioinformaticians.

Borgman also presented a paper on behalf of Ashley Sands, who recently graduated from the Department of Information Studies at UCLA and is now a senior program officer at the Institute of Museum and Library Services in Washington DC. This talk examined characteristics of openness in the collection, dissemination, and reuse of data in two astronomy sky survey case studies: the Sloan Digital Sky Survey (SDSS) and the Large Synoptic Survey Telescope (LSST). Discussion included how the SDSS and LSST data, and datasets derived from the projects by end users, become available for reuse. Sands found that the rate at which data are released, the populations to which the data are made open, the length of time data creators plan to make the data available, the scale at which these endeavors take place, and the stages of these two projects all have a great impact on the extent to which the data are then reused.

Moral of the story: open data is a fast-moving target. To enable reuse, data repositories had better start running.

Humans of Data 3

“I find it relaxing to work with data.  I’m a mathematician by training and much more into applied mathematics, so I find recursive formulas very relaxing and linear algebra is like a fun puzzle, like a crossword.  I like problem solving.  ‘Big data’ is an excellent field for problem solving.  I like finding elegant solutions to complex problems.  I approach problem solving slightly off-kilter from others – I would often get weird grades in school, but it also means that if people give me problems they’re struggling with, I could look at it and come up with something different from them.  This is my first data science meeting.  I’m enjoying the opportunity and being around mathematicians and database people and folks who get excited by data.  And I’m pleased that there are other women I can talk to.”

Humans of Data 2

“One of the coolest things is starting out as a student in the research data management field, being early in my career, and then being able to interact with the same people over time. I feel like I’m kind of growing up as an individual. I feel I can say, hey, you guys made an impact on what I do, and now I can give back.”

Humans of Data 1

“I think you need to express yourself the way you feel you should, because what really matters at this conference is that we’re all interested in making data available, accessible and preserving it, and we shouldn’t feel that we have to sacrifice who we are in part or whole, in order to do our work.

I hear far more people who are complimentary about the way I dress than not, so it’s not like it’s problematic. But it shouldn’t matter anyway. We have to just keep being who we are, and the other people will catch up.”

Humans of Data

At events like International Data Week, much discussion has happened around the technical and legislative challenges and opportunities relating to research data. But in many presentations and group meetings, we have repeatedly heard that our human behaviour – our desires, ambitions, fears, traditions and habits – shape how effectively we create, manage, share and reuse research data assets, and how open we are to collaborating on research data infrastructure.  As many speakers have noted, the technical challenges are usually susceptible to scoping and tackling, but the really intricate work is the work of creating social change and new behaviours.

As an artist and a researcher, I’m passionate about digital curation, digital preservation and research data management, and how those skills are useful to everyone in contemporary society to one extent or another.  And I’m also passionate about the way that research data – and visual art – have so much potential to transform our lives, societies and the world around us.  As I’ve continued to attend data-related conferences, I’ve become fascinated with this human element. I also noted that the International Data Week crowd is a welcome mix of nationalities, genders, ages and ethnicities.  It’s critical that our conversations include people unlike ourselves, and there is so much to be gained from getting to know each other better in order to build the kinds of relationships that can help us make progress across communities, nationalities and disciplines.

To that end, I launched a project called ‘Humans of Data’. It’s a really simple idea – basically the same as the ‘Humans of New York’ project online, where there is a photo and a quote from each (unnamed) person.  I hope this helps to get a more personal, human conversation going amongst the amazing people I meet at data conferences all over the world, connecting with their lives as individuals and having them say something about what they’re passionate about when it comes to data-related issues.

I’ll be posting the ‘Humans of Data’ here on the CODATA blog as each photo and quote becomes available. If you’d like to view them as a group, please click the ‘Humans of Data’ category to group these posts together. If you’d like to participate, please email me at laura.molloy AT oii.ox.ac.uk, or contact me via Twitter @LM_HATII.

Looking forward to our collaboration!

SciDataCon: Opening keynotes

It was a pleasure to start off the first full day of SciDataCon with a keynote from Elaine M Faustman, Professor and Director at the Institute for Risk Analysis and Risk Communication, University of Washington, and member of the ICSU World Data System Scientific Committee.  Professor Faustman’s keynote talk, ‘Challenges and Opportunities with Citizen Science: How a decade of experiences have shaped our forward paths’, introduced a welcome early focus on the importance of rigorous ethical approaches to ‘citizen science’ research projects. Looking back to the early roots of the knowledge practices we now call ‘science’, Faustman reminded us that these practices can of course be traced to the work of European gentleman scientists and their cabinets of curiosities.  She also situated contemporary citizen science practice in the legislative framework of the US citizen’s right to know, work which has been underpinned by standards and acts from the 1940s onwards.

Reflecting on over a decade of citizen science practice in the environment and public health domains, Faustman provided examples of projects where citizens are not only research subjects but are centrally influential in the work, to the point where they express ownership of the project alongside the university team.  Discussion focused on the importance of the ability of research participants to influence the direction and scope of the research project, to provide feedback on its progress, and to have access to the data accrued in order to be able – in the case of public health projects at least – to use it to guide their own decision-making.  The message of deep ethical engagement and building respectful relationships with participants set the scene for a day in which ethical issues reverberated.

The second keynote was by Simon Cox, Research Scientist, Environmental Informatics, CSIRO Land and Water, Clayton, Melbourne, Australia.  In his talk, ‘What does that symbol mean? – controlled vocabularies and vocabulary services’, Cox raised a very pragmatic point about the widespread problem of the non-systematic use of symbols – and keywords – in data.  He demonstrated that we assume symbols and keywords have some sort of shared meaning, at least in a given community, but that the reality is much less systematic. Researchers often use symbols and abbreviations with no widely shared, consistent meaning when creating data. Popular terms describing volume can mean entirely different things in different countries.  And even symbols for terms describing a widely understood measurement, such as the metre, can be problematic to link to a common source: the International Bureau of Weights and Measures provides a definition, which can be found via a given URI. But the fact that this URI has changed regularly from year to year disrupts any expectation of a stable, enduring location for this definition.

Cox suggested a couple of actions to mitigate this situation. Firstly, a new CODATA task group on coordinating data standards will take this work forward. Secondly, the Global Agricultural Concept Scheme – GACS – is the result of three defining sources from agricultural research banding together to deduplicate their respective vocabularies and make them interoperable for agricultural researchers. Cox noted that the technical job is not large but that – in confluence with Faustman’s earlier message – the really big job is achieving the buy-in from the community in question.
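To make the stakes concrete, here is a minimal sketch – not taken from Cox’s talk – of what a vocabulary service enables: a client dereferences a concept URI and reads back the community-agreed label and definition. The URI and response keys below are hypothetical stand-ins; real services such as GACS publish their own identifiers and serialisations. The whole pattern collapses if the URI is not stable.

```python
# Minimal sketch of dereferencing a controlled-vocabulary concept.
# The URI and response keys are hypothetical stand-ins, not a real service's API.
import requests

CONCEPT_URI = "http://example.org/vocab/concept/metre"  # hypothetical

def fetch_concept(uri: str) -> dict:
    """Ask the vocabulary service for a machine-readable (JSON-LD) record."""
    response = requests.get(
        uri,
        headers={"Accept": "application/ld+json"},
        allow_redirects=True,  # services often redirect concept URIs to records
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

concept = fetch_concept(CONCEPT_URI)
# A SKOS-style record typically carries a preferred label and a definition;
# the exact keys depend on how the service serialises its concepts.
print(concept.get("prefLabel"), concept.get("definition"))
```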

So the pesky human dimension appears right at the start of International Data Week!  More information on the keynotes is at http://www.scidatacon.org/site/opening-keynote/

Laura Molloy is a doctoral researcher at the Oxford Internet Institute and the Ruskin School of Art, University of Oxford. She is on Twitter at @LM_HATII.

How to Address Data Challenges in the Biomedical field: Solutions for Data Access, Sharing and Reuse

Irene Pasquetto is a PhD Student in Information Studies at UCLA.

As scientists in the biomedical fields generate more and more diverse data, the real question today is not only how to make data “sharable” or “open”, but also, and especially, how to make them useful and reusable. At SciDataCon 2016, speakers from funding agencies, research universities, data research institutions, and the publishing industry came together to try to address this key question.

Around 20 highly interdisciplinary papers, organized in four busy sessions, addressed the problem from different perspectives while agreeing on an essential point: developing new, open frameworks and guidelines is not enough. Indeed, what characterized this edition of SciDataCon was a focus on proposing and discussing applicable solutions that can address the management, use, and reuse of large-scale datasets in biomedicine today, right now.

Three main themes emerged across the sessions:

  1. How to enable scientific reproducibility.
  2. How to apply data science techniques to biological research.
  3. How to make heterogeneous bio-databases globally interoperable.

#1 HOW TO ENABLE SCIENTIFIC REPRODUCIBILITY

Leslie McIntosh (Director, Center for Biomedical Informatics, Washington University in St. Louis) moderated session 1, which focused on the first topic: solving the problem of reproducibility in science, starting from making biomedical data reusable to this end.

Tim Errington (Center for Open Science) offered a clear and useful distinction between reproducibility, which he defined as the possibility of re-running the experiment the way it was originally conducted, and replicability, which is the possibility of getting the same results by reusing the same methods of data collection and analysis with novel data. Errington invited the audience to reflect on two main issues: first, incentives for individual success are focused on “getting it published, not getting it right,” and second, instead of focusing on problems with either open access or open data, we should think about “open workflows” that include the whole process of scientific research.

Similarly, Anthony Juehne (Washington University in St. Louis) talked about how to address reproducibility issues step by step across the entire “scientific workflow”. Juehne presented two possible solutions to the problem: “Wrap, Link, and Cite” data products, or “Contain and Visualize” them using virtual machines.

Finally, Cynthia Hudson Vitale highlighted a rarely addressed aspect of the reproducibility debate: the fundamental role played by biocurators. While their work is often not acknowledged in the community, biocurators are the ones who in fact do the hard job of cleaning and organizing data so that they can be used to reproduce experiments. Hudson Vitale proposed some concrete solutions to the problem. First, domain reproducibility articles need to include a greater variety of curation treatments. And, second, curators need to publish in domain journals to ensure the full breadth of curation treatments is discussed with researchers.

#2 HOW TO APPLY DATA SCIENCE TECHNIQUES TO BIOLOGY RESEARCH

A second main theme, which emerged in session 2, was how to apply recent cutting-edge statistical and computational techniques for data science (machine learning algorithms, deep learning, text mining) to the biomedical knowledge discovery process. In a session introduced and moderated by Jiawei Han (University of Illinois at Urbana-Champaign), computer scientists, biologists and biomedical researchers working on biological text mining presented overviews and surveys on the topic.

Beth Sydney Linas and Wendy Nilsen from IIS, the Division of Information and Intelligent Systems at NSF (National Cancer Moonshot), gave an overview of how data science can be used to uncover the underlying mechanisms that drive cancer and to develop methods that will allow clinical researchers to eliminate the disease. They concluded that advances in novel computing (especially machine learning, artificial intelligence, network analysis, and database mining, as well as bioinformatics and image analysis) also need to be directed toward health-related research.

Elaine M. Faustman (University of Washington) presented an annotated database of DNA and protein sequences derived from environmental samples showing antibiotic resistance (AR) in laboratory experiments. The database aims to help fill the current gap in knowledge about the relations between antibiotic-resistance genes present in the environment and genomic sequences derived from clinical antibiotic-resistant isolates.

Jiawei Han, Heng Ji, Peipei Ping, and Wei Wang presented results from their analysis of a massive collection of biomedical texts from the medical research literature using semi-supervised text mining. The researchers argued that interesting biological entities and relationships that are currently “lost” in unstructured data can be efficiently re-discovered by applying bio-text-mining techniques to PubMed’s massive biological text corpus.
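As a toy illustration of the general idea – and emphatically not the presenters’ method – one can already surface candidate relationships by counting co-occurrences of known entity names in abstracts; semi-supervised approaches then refine such candidates using a small amount of labelled data. The mini-corpus and entity list below are invented for the sketch.

```python
# Toy sketch: co-occurring entity names in abstracts as candidate relationships.
from itertools import combinations
from collections import Counter

# Hypothetical mini-corpus standing in for PubMed abstracts.
abstracts = [
    "TP53 mutations are associated with reduced MDM2 expression.",
    "MDM2 inhibits TP53 activity in several tumour types.",
]
# In practice, the entity dictionary would come from curated vocabularies.
entities = {"TP53", "MDM2"}

candidate_relations = Counter()
for text in abstracts:
    found = {e for e in entities if e in text}  # naive string matching
    for pair in combinations(sorted(found), 2):
        candidate_relations[pair] += 1

# Frequently co-occurring pairs become candidates for curation or for
# training a semi-supervised relation extractor.
print(candidate_relations.most_common())
```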

#3 HOW TO MAKE HETEROGENEOUS BIO-DATABASES GLOBALLY INTEROPERABLE

Finally, over 10 presenters in sessions 3 and 4 shared their first-hand experiences in managing and building integrated biomedical databases and making them interoperable. The biomedical research community and funders seek to make their research resources FAIR: findable, accessible, interoperable, and reusable. They also seek to support improved data stewardship by strengthening incentives such as data citation. Speakers shared a common concern: how to create data standards and practices from the bottom up. As the speakers suggested, it is necessary to be aware of existing local, cultural and social incentives, to clearly define possible audiences, and to involve scientists in the database-building process. Individual projects can be consulted at the sessions’ webpage: http://www.scidatacon.org/2016/sessions/34/

The Data-At-Risk Task Group (DAR-TG) In Expansive Mood

This post comes from Elizabeth Griffin, chair of the Data at Risk Task Group

Where’d we go? Boulder. What’d we get? Bolder! When’d we get it? Now!! (or, to be precise, this past week, Sept 8–9). What will we do? Make Things Happen!!!

Over 50 of us were able to drop everything and get to NCAR in Boulder (CO, USA) for a 2-day Workshop on the Rescue of Data At Risk (defined as raw or meagerly-reduced data in non-electronic or primitive digital media and formats, often with separated or insufficient metadata, and all without promise of adequate preservation). We came from most quarters of the globe: Tasmania, South Africa, Ethiopia, India, Italy and England, as well as from Canada and the USA itself. Graciously hosted by NCAR at its Center Green site, and generously sponsored by the RDA, Elsevier and the Sloan Foundation, this Workshop was without doubt a scene of Work, demanding the full attention of everyone through 5 organized 1-hour break-out sessions to discuss the 5 themes of the meeting: (1) locating (and often rescuing) “at risk” data, (2) preserving them for the longer term, (3) digitizing them, (4) adding (and preserving) necessary metadata, and (5) depositing and disseminating the end products appropriately. Oral case studies and reports set the individual scenes, and numerous posters provided additional thought-provoking materials. We all “worked”, and we all scrutinized what was being offered before “shopping”, and at the end of the two days our boldness had seen true growth.

Parallel responses to questions posed to each break-out group are now furnishing input to the on-line Guidelines for Rescuing Data At Risk, which DAR-TG will produce, and have prompted ideas for a reference handbook (just a little further down the line), which will also be prepared.

Our determination to “Make Things Happen” also engendered commitments (1) to run sub-TG groups with specific foci on (a) metadata, (b) catalogues of at-risk data rescued or to-be rescued and (c) the location and preservation of hardware (aka tape-readers and their ilk) and science-specific software, (2) to organize “regional” workshops as a means to engage the great many other interested parties who are also “out there”, and (3) to fund and appoint an early-career Fellow to coordinate a TG-wide investigation of a specific theme (tbd, possibly “Water in the World”) where just about every facet of “at-risk” data, from the earth’s atmosphere down to its fossils, has invaluable evidence to contribute.

These plans will of course take time and effort, and some of them resources too, and even formulating them was itself quite exhausting (despite the scrumptious refreshments and meals created and served by the bountiful UCAR kitchen). But our “demonstration” proved without doubt that consolidating and proliferating our re-channelled ideas and objectives will and must be Made To Happen. The ultimate humanitarian benefits, even just in the domain of meteorology in tropical countries, as featured in the heart-rending details given in Rick Crouthamel’s Public Lecture on “A World Heritage in Peril”, will be more than ample rewards.

We bolder on!

Eugene Eremchenko: Candidacy for CODATA Executive Committee

This is the seventeenth in the series of short statements from candidates in the forthcoming CODATA elections. Eugene Eremchenko is a new candidate for the CODATA Executive Committee as an Ordinary Member. He was nominated by WDS.

The main area of interest of Eugene Eremchenko is in fundamental geospatial issues: new trends in cartography, the Situation Awareness concept, net-centricity, geospatial aspects of decision making, and the theory of signs (semiotics). The new data era requires new methods in cartography and new ways of working with geodata. This belief is shared by many scientists, and the shift is rightly called the ‘Geospatial Revolution’. The core concept of this revolution is that of the ‘Digital Earth’.

Eugene Eremchenko gave the first intentional definition of Neogeography (2008) and made the first Russian open 3D models of real cities (2005-2007). He formulated the paradox associated with the ‘signless’ perception of space (the ‘netcentricity paradox’, 2011) and proposed the concept of the ‘zero sign’ to resolve this paradox. He is also a co-author of the periodic tables of maps and methods of scientific visualization, as well as of the concept of ‘superholography’ (2015-2016).

Eugene Eremchenko is the Head of the Neogeography R&D Group (Protvino, Russia), a scientific researcher at Lomonosov Moscow State University (Moscow, Russia), and scientific director of Technopark Protvino (Protvino, Russia). He has been the secretary of the ICA commission ‘GIS for Sustainable Development’ since 2015. He is an elected member of the council of the International Society for Digital Earth (ISDE) (since 2016) and co-chair of the Outreach Committee of ISDE. He is also known in Russia as a popular geospatial blogger.

Eugene Eremchenko believes that CODATA’s philosophy is very similar to the ideas behind the ‘geospatial revolution’. The new era of science and technology requires a new concept of scientific data. Developing this new concept and sharing CODATA’s vision will be the focus of Eugene Eremchenko’s scientific activities in the future.