Library

Permanent URI for this community: https://scholarworks.montana.edu/handle/1/318

Montana State University Library (MSU Library) is the academic library of Montana State University, Montana's land-grant university, in Bozeman, Montana, United States. It is the flagship library for all campuses of the Montana State University System. The library supports the research and information needs of the university's students and faculty, as well as the Montana Extension Service.


Search Results

Now showing 1 - 5 of 5
  • Item
    An analysis of use and performance data aggregated from 35 institutional repositories
    (2020-11) Arlitsch, Kenning; Wheeler, Jonathan; Pham, Minh Thi Ngoc; Parulian, Nikolaus Nova
    Purpose: This study demonstrates that aggregated data from the Repository Analytics and Metrics Portal (RAMP) have significant potential to analyze visibility and use of institutional repositories (IR), as well as potential factors affecting their use, including repository size, platform, content, device and global location. The RAMP dataset is unique and public.
    Design/methodology/approach: The webometrics methodology was followed to aggregate and analyze use and performance data from 35 institutional repositories in seven countries that were registered with the RAMP for a five-month period in 2019. The RAMP aggregates Google Search Console (GSC) data to show IR items that surfaced in search results from all Google properties.
    Findings: The analyses demonstrate large performance variances across IR as well as low overall use. The findings also show that device use affects search behavior, that different content types such as electronic theses and dissertations (ETD) may affect use, and that searches originating in the Global South show much higher use of mobile devices than in the Global North.
    Research limitations/implications: The RAMP relies on GSC as its sole data source, resulting in somewhat conservative overall numbers. However, the data are also expected to be as robot-free as can be hoped.
    Originality/value: This may be the first analysis of aggregate use and performance data derived from a global set of IR, using an openly published dataset. RAMP data offer significant research potential with regard to quantifying and characterizing variances in the discoverability and use of IR content.
    (A hedged sketch of this kind of GSC query appears after this list.)
  • Item
    Data-Driven Improvement to Institutional Repository Discoverability and Use
    (Institute of Museum and Library Services, 2018-09) Arlitsch, Kenning; Kahanda, Indika; OBrien, Patrick; Shanks, Justin D.; Wheeler, Jonathan
    The Montana State University (MSU) Library, in partnership with the MSU School of Computing, the University of New Mexico Library, and DuraSpace, seeks a $49,998 Planning Grant from the Institute of Museum and Library Services, through its National Leadership Grant program under the National Digital Platform project category, to develop a sustainability plan for the Repository Analytics & Metrics Portal that will keep its dataset open and available to all researchers. The proposal also includes developing a preliminary institutional repository (IR) reporting model; a search engine optimization (SEO) audit and remediation plan for IR; and exploring whether machine learning can improve the quality of IR content metadata. The project team expects work conducted in this planning grant to make the case for advanced research projects that will be high-impact and worthy of funding.
  • Item
    RAMP - The Repository Analytics and Metrics Portal: A prototype Web service that accurately counts item downloads from institutional repositories
    (2016-11) OBrien, Patrick; Arlitsch, Kenning; Mixter, Jeff; Wheeler, Jonathan; Sterman, Leila B.
    Purpose: The purpose of this paper is to present data that begin to detail the deficiencies of the log file analytics reporting methods that are commonly built into institutional repository (IR) platforms. The authors propose a new method for collecting and reporting IR item download metrics. This paper introduces a web service prototype that captures activity that current analytics methods are likely to either miss or over-report.
    Design/methodology/approach: Data were extracted from the DSpace Solr logs of an IR and were cross-referenced with Google Analytics and Google Search Console data to directly compare Citable Content Downloads recorded by each method.
    Findings: This study provides evidence that log file analytics data appear to grossly over-report due to traffic from robots that are difficult to identify and screen. The study also introduces a proof-of-concept prototype that makes the research method easily accessible to IR managers who seek accurate counts of Citable Content Downloads.
    Research limitations/implications: The method described in this paper does not account for direct access to Citable Content Downloads that originate outside Google Search properties.
    Originality/value: This paper proposes that IR managers adopt a new reporting framework that classifies IR page views and download activity into three categories that communicate metrics about user activity related to the research process. It also proposes that IR managers rely on a hybrid of existing Google services to improve reporting of Citable Content Downloads, and offers a prototype web service where IR managers can test results for their repositories.
    (A hedged sketch of the log-based count appears after this list.)
  • Item
    Undercounting File Downloads from Institutional Repositories
    (Emerald, 2016-10) OBrien, Patrick; Arlitsch, Kenning; Sterman, Leila B.; Mixter, Jeff; Wheeler, Jonathan; Borda, Susan
    A primary impact metric for institutional repositories (IR) is the number of file downloads, which are commonly measured through third-party web analytics software. Google Analytics, a free service used by most academic libraries, relies on HTML page tagging to log visitor activity on Google's servers. However, web aggregators such as Google Scholar link directly to high-value content (usually PDF files), bypassing the HTML page and failing to register these direct access events. This paper presents evidence from a study of four institutions demonstrating that the majority of IR activity is not counted by page-tagging web analytics software, and proposes a practical solution for significantly improving the reporting relevancy and accuracy of IR performance metrics using Google Analytics.
    (A hedged sketch of a server-side download event appears after this list.)
  • Item
    Data set supporting study on Undercounting File Downloads from Institutional Repositories [dataset]
    (Montana State University ScholarWorks, 2016-07) OBrien, Patrick; Arlitsch, Kenning; Sterman, Leila B.; Mixter, Jeff; Wheeler, Jonathan; Borda, Susan
    This dataset supports the study published as "Undercounting File Downloads from Institutional Repositories". The following items are included:
    1. gaEvent.zip = PDF exports of Google Analytics Events reports for each IR.
    2. gaItemSummaryPageViews.zip = PDF exports of Google Analytics Item Summary Page Views reports. Also included is a text file containing the Regular Expressions used to generate each report's Advanced Filter.
    3. gaSourceSessions.zip = PDF exports of Google Analytics Referral reports used to determine the percentage of referral traffic from Google Scholar. Note: does not include Utah due to issues with the structure of Utah's IR and the configuration of its Google Analytics.
    4. irDataUnderCount.tsv.zip = TSV file of the complete Google Search Console data set, containing 57,087 unique URLs in 413,786 records.
    5. irDataUnderCountCiteContentDownloards.tsv.zip = TSV of the Google Search Console records containing the Citable Content Download records that were not counted in Google Analytics.
    (A hedged loading sketch follows this list.)
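
The RAMP items above are built on Google Search Console (GSC) Search Analytics data. As a rough illustration of that data source only (a sketch, not RAMP's actual harvesting code), the following Python fragment queries the GSC API for one day of search performance broken out by page, device, and country; the service-account file and site URL are placeholder assumptions.

    # Sketch: pull one day of Search Console performance data, assuming a
    # Google service account with read access to the property.
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
    creds = service_account.Credentials.from_service_account_file(
        "service-account.json", scopes=SCOPES)  # placeholder path
    service = build("searchconsole", "v1", credentials=creds)

    response = service.searchanalytics().query(
        siteUrl="https://ir.example.edu/",  # placeholder property
        body={
            "startDate": "2019-01-01",
            "endDate": "2019-01-01",
            "dimensions": ["page", "device", "country"],  # dimensions the analyses above draw on
            "rowLimit": 25000,  # API maximum per request
        },
    ).execute()

    for row in response.get("rows", []):
        page, device, country = row["keys"]
        print(page, device, country, row["clicks"], row["impressions"])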
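
The prototype paper above cross-references DSpace Solr log data with Google services. A minimal sketch of the log side of that comparison follows: counting non-robot file views in a DSpace Solr statistics core. The core URL is a placeholder, and the field names (type, statistics_type, isBot) follow common DSpace statistics schemas but should be verified against the local installation.

    # Sketch: count file (bitstream) views recorded in DSpace's Solr
    # statistics core, excluding traffic already flagged as robots.
    import requests

    SOLR = "http://localhost:8080/solr/statistics/select"  # placeholder URL
    params = {
        "q": "type:0 AND statistics_type:view",  # type:0 = bitstream in DSpace
        "fq": "isBot:false",                     # drop flagged robot traffic
        "rows": 0,                               # only the count is needed
        "wt": "json",
    }
    count = requests.get(SOLR, params=params, timeout=10).json()["response"]["numFound"]
    print(f"non-robot bitstream views: {count}")

As the paper's findings suggest, even a count filtered this way can grossly over-report, because many robots are never flagged.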
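
The undercounting study above shows that direct PDF links bypass Google Analytics page tags. One common remedy, offered here as an assumed illustration rather than the authors' published hybrid of Google services, is to register such downloads server-side through the Universal Analytics Measurement Protocol (the protocol in use at the time of these studies); the tracking ID and event labels are placeholders.

    # Sketch: send a server-side "download" event for a bitstream request
    # that never loads a tagged HTML page (e.g. a Google Scholar PDF link).
    import uuid
    import requests

    def record_download(file_url: str, tracking_id: str = "UA-XXXXXX-Y"):
        payload = {
            "v": "1",                  # Measurement Protocol version
            "tid": tracking_id,        # GA property ID (placeholder)
            "cid": str(uuid.uuid4()),  # anonymous client ID
            "t": "event",
            "ec": "Citable Content",   # event category (assumed label)
            "ea": "Download",
            "el": file_url,
        }
        requests.post("https://www.google-analytics.com/collect",
                      data=payload, timeout=5)

    record_download("https://ir.example.edu/bitstream/1/123/thesis.pdf")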
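
The Search Console TSV extracts in the dataset above can be inspected with pandas. The sketch below assumes each zip archive contains a single TSV sitting in the working directory; column names are not specified in the dataset description, so it prints them rather than assuming any.

    # Sketch: load the full Google Search Console extract and confirm its
    # documented shape (413,786 records covering 57,087 unique URLs).
    import pandas as pd

    df = pd.read_csv("irDataUnderCount.tsv.zip", sep="\t", compression="zip")
    print(len(df))               # expected: 413786
    print(df.columns.tolist())   # inspect the actual column names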