Library

Permanent URI for this community: https://scholarworks.montana.edu/handle/1/318

Montana State University Library (MSU Library) is the academic library of Montana State University, Montana's land-grant university, in Bozeman, Montana, United States. It is the flagship library for all of the Montana State University System's campuses. In 1978, the library was named the Roland R. Renne Library to honor the sixth president of the university. The library supports the research and information needs of Montana's students and faculty, as well as the Montana Extension Service.

Search Results

Now showing 1 - 10 of 11
  • Quantifying Scientific Jargon
    (SAGE Publications, 2020-07) Willoughby, Shannon D.; Johnson, Keith; Sterman, Leila B.
    When scientists disseminate their work to the general public, excessive use of jargon should be avoided: if too much technical language is used, the message is not effectively conveyed. However, determining which words are jargon and how much jargon is too much is a difficult task, partly because it can be challenging to know which terms the general public knows, and partly because it can be challenging to ensure scientific accuracy while avoiding esoteric terminology. To help address this issue, we have written an R script that an author can use to quantify the amount of scientific jargon in any written piece and make appropriate edits based on the target audience.
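    The published tool is an R script; as a rough illustration of the approach only, the sketch below compares a text's words against a user-supplied list of common English words and reports the fraction that falls outside it. The word-list file name is an assumption for illustration, not part of the authors' implementation.

    ```python
    # Illustrative sketch only (the published tool is an R script): flag words
    # that do not appear in a supplied list of common English words.
    # "common_words.txt" (one word per line) is an assumed input file.
    import re

    def jargon_fraction(text: str, word_list: str = "common_words.txt") -> float:
        """Return the fraction of words in `text` absent from the common-word list."""
        with open(word_list, encoding="utf-8") as f:
            common = {line.strip().lower() for line in f if line.strip()}
        words = re.findall(r"[a-zA-Z']+", text.lower())
        if not words:
            return 0.0
        return sum(w not in common for w in words) / len(words)

    if __name__ == "__main__":
        sample = "The mesoscale eddy flux modulates baroclinic instability."
        print(f"Potential jargon fraction: {jargon_fraction(sample):.0%}")
    ```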
  • Discovery and Reuse of Open Datasets: An Exploratory Study
    (Journal of eScience Librarianship, 2016-07) Mannheimer, Sara; Sterman, Leila B.; Borda, Susan
    Objective: This article analyzes twenty cited or downloaded datasets and the repositories that house them, in order to produce insights that can be used by academic libraries to encourage discovery and reuse of research data in institutional repositories.
    Methods: Using Thomson Reuters' Data Citation Index and repository download statistics, we identified twenty cited/downloaded datasets. We documented the characteristics of the cited/downloaded datasets and their corresponding repositories in a self-designed rubric. The rubric includes six major categories: basic information; funding agency and journal information; linking and sharing; factors to encourage reuse; repository characteristics; and data description.
    Results: Our small-scale study suggests that cited/downloaded datasets generally comply with basic recommendations for facilitating reuse: data are documented well; formatted for use with a variety of software; and shared in established, open access repositories. Three significant factors also appear to contribute to dataset discovery: publishing in discipline-specific repositories; indexing in more than one location on the web; and using persistent identifiers. The cited/downloaded datasets in our analysis came from a few specific disciplines, and tended to be funded by agencies with data publication mandates.
    Conclusions: The results of this exploratory research provide insights that can inform academic librarians as they work to encourage discovery and reuse of institutional datasets. Our analysis also suggests areas in which academic librarians can target open data advocacy in their communities in order to begin to build open data success stories that will fuel future advocacy efforts.
  • Citations as Data: Harvesting the Scholarly Record of Your University to Enrich Institutional Knowledge and Support Research
    (2017-11) Sterman, Leila B.; Clark, Jason A.
    Many research libraries are looking for new ways to demonstrate value for their parent institutions. Metrics, assessment, and promotion of research continue to grow in importance, but they have not always fallen within the scope of services for the research library. Montana State University (MSU) Library recognized both a need and an interest in quantifying the citation record and scholarly output of our university. With this vision in mind, we began positioning citation collection as the data engine that drives scholarly communication, deposits into our IR, and assessment of research activities. We envisioned a project that might: provide transparency around the acts of scholarship at our university; celebrate the research we produce; and build new relationships among our researchers. The result was our MSU Research Citation application (https://arc.lib.montana.edu/msu-researchcitations/) and our research publication promotion service (www.montana.edu/research/publications/). The application and accompanying services are predicated on the principle that each citation is a discrete data object that can be searched, browsed, exported, and reused. In this formulation, the records of our research publications are the data that can open up possibilities for new library projects and services.
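    The "citation as a discrete data object" idea can be sketched as a minimal record type; the field names below are assumptions rather than the application's actual schema, and the example record is drawn from the first item in this list.

    ```python
    # Minimal sketch: one citation as a searchable, exportable data object.
    # Field names are assumed for illustration, not the MSU application's schema.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class Citation:
        title: str
        authors: list[str]
        year: int
        publisher: str = ""
        doi: str = ""

        def to_json(self) -> str:
            """Export the record so other library services can reuse it."""
            return json.dumps(asdict(self))

    record = Citation(
        title="Quantifying Scientific Jargon",
        authors=["Willoughby, Shannon D.", "Johnson, Keith", "Sterman, Leila B."],
        year=2020,
        publisher="SAGE Publications",
    )
    print(record.to_json())
    ```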
  • The enemy of the good
    (2017-08) Sterman, Leila B.
    Green open access, the subsection of open access in which no additional money changes hands and a version of a paper is posted online, is the most financially accessible means of providing broad access to research for many authors, and it consumes a great deal of librarian time. The most common form of green open access is the deposit of postprints: versions of papers that have been through peer review but often not copyediting or journal layout and typesetting. Journal publishers allow these versions to be posted with restrictions, based on the understanding that scholars will seek out the version of record and cite that work in any future publication. The secondary versions therefore do not impede the most valuable metric of journal publication--citations--and do not impact subscriptions, as discovery happens at the individual level and purchasing at the institutional level. Here, Sterman discusses how specifics in publishers' green OA policies are bogging down IR deposits of scholarly literature.
  • RAMP - The Repository Analytics and Metrics Portal: A prototype Web service that accurately counts item downloads from institutional repositories
    (2016-11) OBrien, Patrick; Arlitsch, Kenning; Mixter, Jeff; Wheeler, Jonathan; Sterman, Leila B.
    Purpose: The purpose of this paper is to present data that begin to detail the deficiencies of log file analytics reporting methods that are commonly built into institutional repository (IR) platforms. The authors propose a new method for collecting and reporting IR item download metrics. This paper introduces a web service prototype that captures activity that current analytics methods are likely to either miss or over-report.
    Design/methodology/approach: Data were extracted from DSpace Solr logs of an IR and were cross-referenced with Google Analytics and Google Search Console data to directly compare Citable Content Downloads recorded by each method.
    Findings: This study provides evidence that log file analytics data appear to grossly over-report due to traffic from robots that are difficult to identify and screen. The study also introduces a proof-of-concept prototype that makes the research method easily accessible to IR managers who seek accurate counts of Citable Content Downloads.
    Research limitations/implications: The method described in this paper does not account for direct access to Citable Content Downloads that originate outside Google Search properties.
    Originality/value: This paper proposes that IR managers adopt a new reporting framework that classifies IR page views and download activity into three categories that communicate metrics about user activity related to the research process. It also proposes that IR managers rely on a hybrid of existing Google Services to improve reporting of Citable Content Downloads and offers a prototype web service where IR managers can test results for their repositories.
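    The cross-referencing step can be sketched roughly as follows: join per-URL counts from repository log files with per-URL counts from Google Search Console, and flag URLs where log analytics report far more activity than verified search clicks. The file and column names below are assumptions, not RAMP's actual formats.

    ```python
    # Hedged sketch of cross-referencing log-file download counts with Google
    # Search Console click counts; input file and column names are assumed.
    import csv
    from collections import defaultdict

    def load_counts(path: str, url_col: str, count_col: str) -> dict[str, int]:
        counts: dict[str, int] = defaultdict(int)
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                counts[row[url_col]] += int(row[count_col])
        return counts

    log_counts = load_counts("solr_downloads.csv", "url", "downloads")
    gsc_counts = load_counts("search_console_clicks.csv", "url", "clicks")

    # URLs whose log counts dwarf verified clicks are candidates for
    # unfiltered robot traffic.
    for url, n in sorted(log_counts.items(), key=lambda kv: -kv[1])[:20]:
        verified = gsc_counts.get(url, 0)
        ratio = n / verified if verified else float("inf")
        print(f"{url}\tlog={n}\tgsc={verified}\tratio={ratio:.1f}")
    ```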
  • Undercounting File Downloads from Institutional Repositories
    (Emerald, 2016-10) OBrien, Patrick; Arlitsch, Kenning; Sterman, Leila B.; Mixter, Jeff; Wheeler, Jonathan; Borda, Susan
    A primary impact metric for institutional repositories (IRs) is the number of file downloads, which are commonly measured through third-party web analytics software. Google Analytics, a free service used by most academic libraries, relies on HTML page tagging to log visitor activity on Google's servers. However, web aggregators such as Google Scholar link directly to high-value content (usually PDF files), bypassing the HTML page and failing to register these direct access events. This paper presents evidence from a study of four institutions demonstrating that the majority of IR activity is not counted by page-tagging web analytics software, and proposes a practical solution for significantly improving the reporting relevancy and accuracy of IR performance metrics using Google Analytics.
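    The undercounting mechanism can be illustrated by classifying raw web-server log requests into HTML page views (where a tracking tag can fire) and direct file requests (which bypass the HTML page and the tag entirely). The combined log format and the file extensions below are assumptions for illustration, not the paper's method.

    ```python
    # Hedged illustration: page tagging only sees HTML page loads, so direct
    # requests for files such as PDFs are invisible to it. Assumes the common
    # Apache/Nginx combined log format.
    import re

    REQUEST = re.compile(r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

    def classify(log_path: str) -> tuple[int, int]:
        page_views = direct_downloads = 0
        with open(log_path, encoding="utf-8", errors="replace") as f:
            for line in f:
                m = REQUEST.search(line)
                if not m or m.group("status") != "200":
                    continue
                if m.group("path").lower().endswith((".pdf", ".csv", ".zip")):
                    direct_downloads += 1  # bypasses the HTML page: tag never fires
                else:
                    page_views += 1  # a tag in the HTML page could record this
        return page_views, direct_downloads

    views, downloads = classify("access.log")
    print(f"HTML page views: {views}; direct downloads missed by tagging: {downloads}")
    ```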
  • Data set supporting study on Undercounting File Downloads from Institutional Repositories [dataset]
    (Montana State University ScholarWorks, 2016-07) OBrien, Patrick; Arlitsch, Kenning; Sterman, Leila B.; Mixter, Jeff; Wheeler, Jonathan; Borda, Susan
    This dataset supports the study published as "Undercounting File Downloads from Institutional Repositories". The following items are included:
    1. gaEvent.zip = PDF exports of Google Analytics Events reports for each IR.
    2. gaItemSummaryPageViews.zip = PDF exports of Google Analytics Item Summary Page Views reports. Also included is a text file containing the regular expressions used to generate each report's Advanced Filter.
    3. gaSourceSessions.zip = PDF exports of Google Analytics Referral reports used to determine the percentage of referral traffic from Google Scholar. Note: does not include Utah, due to issues with the structure of Utah's IR and the configuration of their Google Analytics.
    4. irDataUnderCount.tsv.zip = TSV file of the complete Google Search Console data set, containing 57,087 unique URLs in 413,786 records.
    5. irDataUnderCountCiteContentDownloards.tsv.zip = TSV file of the Google Search Console records containing the Citable Content Download records that were not counted in Google Analytics.
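    Assuming the TSV in item 4 uses a column named "url" (an assumption; the actual header may differ), the stated totals can be checked with a few lines of pandas:

    ```python
    # Quick-look check of the stated totals: 413,786 records, 57,087 unique URLs.
    # The "url" column name is assumed; inspect df.columns if it differs.
    import pandas as pd

    df = pd.read_csv("irDataUnderCount.tsv.zip", sep="\t", compression="zip")
    print(f"records: {len(df):,}")
    print(f"unique URLs: {df['url'].nunique():,}")
    ```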
  • Cited/Downloaded Dataset and Repository Characteristics [dataset]
    (Montana State University ScholarWorks, 2016-02) Mannheimer, Sara; Borda, Susan; Sterman, Leila B.
    This rubric documents the characteristics of high-use datasets and their repositories, with "high-use" defined as either highly cited in Thomson Reuters' Data Citation Index or highly downloaded in an institutional repository. The authors reviewed publicly available information on repository websites and recorded their observations in the rubric. The rubric addresses six major characteristics of high-use datasets and their repositories: basic information; funding agency and journal information; linking and sharing; factors to encourage reuse; repository characteristics; and data description.
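    The rubric's six top-level categories can be represented as a simple structure; only the category names come from the description above, and the example fields under each are illustrative assumptions, not the rubric's actual line items.

    ```python
    # Six rubric categories from the description above; the fields listed under
    # each category are assumptions, not the rubric's actual line items.
    RUBRIC: dict[str, list[str]] = {
        "basic information": ["dataset title", "authors", "publication year"],
        "funding agency and journal information": ["funder", "associated journal"],
        "linking and sharing": ["persistent identifier", "links to related articles"],
        "factors to encourage reuse": ["documentation quality", "open file formats"],
        "repository characteristics": ["discipline-specific vs. institutional", "indexing"],
        "data description": ["file formats", "size", "README present"],
    }

    for category, fields in RUBRIC.items():
        print(f"{category}: {', '.join(fields)}")
    ```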
  • Montana State University Research Data Census Instrument, Version 1
    (2015-01) Arlitsch, Kenning; Clark, Jason A.; Hager, Ben; Heetderks, Thomas; Llovet, Pol; Mannheimer, Sara; Mazurie, Aurélien J.; Sheehan, Jerry; Sterman, Leila B.
    Montana State University developed the Research Data Census (RDC) to engage our local research community in an interactive dialogue about their data. The research team was particularly interested in learning more about the following issues at Montana State: the size of research data; the role that local and wide area networks play in accessing and sharing resources; data sharing behaviors; and interest in existing services that assist with the curation, storage, and publication of scientific data discoveries.
  • Final Performance Report Narrative: Getting Found
    (2014-11) Arlitsch, Kenning; OBrien, Patrick; Godby, Jean; Mixter, Jeff; Clark, Jason A.; Young, Scott W. H.; Smith, Devon; Rossmann, Doralyn; Sterman, Leila B.; Tate, Angela; Hansen, Mary Anne
    The research we proposed to IMLS in 2011 was prompted by a realization that the digital library at the University of Utah was suffering from low visitation and use. We knew that we had a problem with low visibility on the Web because search engines such as Google were not harvesting and indexing our digitized objects, but we had only a limited understanding of the reasons. We had also done enough quantitative surveys of other digital libraries to know that many libraries were suffering from this problem. IMLS funding helped us understand the reasons why library digital repositories weren't being harvested and indexed. Thanks to IMLS funding of considerable research and the application of better practices, we were able to dramatically improve the indexing ratios of Utah's digital objects in Google, and consequently the number of visitors to the digital collections increased. In presentations and publications we shared the practices that led to our accomplishments at Utah. The first year of the grant focused on what the research team has come to call "traditional search engine optimization," and most of this work was carried out at the University of Utah. The final two years of the grant were conducted at Montana State University after the PI was appointed as dean of the library there. These latter two years moved more toward "Semantic Web optimization," which includes areas of research in semantic identity, data modeling, analytics, and social media optimization.