Publications by Colleges and Departments (MSU - Bozeman)
Permanent URI for this community: https://scholarworks.montana.edu/handle/1/3
12 results
Item: How should we train and assess our STEM graduate students in oral communication? (2020-01)
Willoughby, Shannon D.; Davis, Kent; Green, Jennifer; Hughes, Bryce; LaMeres, Brock; Sterman, Leila B.
A poster presented at an IGE annual PI reporting meeting in January 2020. Today's STEM graduate students need to be able to communicate their research effectively with the public. How do we develop and assess a curriculum that fosters these skills in tomorrow's science professionals?

Item: Quantifying Scientific Jargon (SAGE Publications, 2020-07)
Willoughby, Shannon D.; Johnson, Keith; Sterman, Leila B.
When scientists disseminate their work to the general public, excessive jargon should be avoided: if too much technical language is used, the message is not effectively conveyed. However, determining which words are jargon, and how much jargon is too much, is a difficult task, partly because it can be challenging to know which terms the general public knows, and partly because it can be challenging to ensure scientific accuracy while avoiding esoteric terminology. To help address this issue, we have written an R script that an author can use to quantify the amount of scientific jargon in any written piece and make appropriate edits based on the target audience.

Item: Discovery and Reuse of Open Datasets: An Exploratory Study (Journal of eScience Librarianship, 2016-07)
Mannheimer, Sara; Sterman, Leila B.; Borda, Susan
Objective: This article analyzes twenty cited or downloaded datasets and the repositories that house them, in order to produce insights that academic libraries can use to encourage discovery and reuse of research data in institutional repositories. Methods: Using Thomson Reuters' Data Citation Index and repository download statistics, we identified twenty cited/downloaded datasets. We documented the characteristics of the cited/downloaded datasets and their corresponding repositories in a self-designed rubric.
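The jargon-measurement approach described in "Quantifying Scientific Jargon" above can be sketched as follows. This is a hypothetical Python analogue of the published R script, not the script itself; the small common-word list is a purely illustrative stand-in for a real general-vocabulary corpus.

```python
# Illustrative sketch: score a text by the fraction of its words that do
# not appear in a general-vocabulary list. The COMMON_WORDS set here is a
# tiny hypothetical placeholder; a real tool would load a large corpus.
import re

COMMON_WORDS = {
    "the", "of", "and", "a", "to", "in", "is", "we", "that", "for",
    "study", "data", "water", "light", "energy",
}

def jargon_score(text: str) -> float:
    """Return the fraction of words not found in the common-word list."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    jargon = [w for w in words if w not in COMMON_WORDS]
    return len(jargon) / len(words)

print(jargon_score("We study the data"))  # all common words -> 0.0
```

In practice, the threshold for "too much jargon" would depend on the target audience, which is why the published script leaves the editing decisions to the author.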
The rubric includes six major categories: basic information; funding agency and journal information; linking and sharing; factors to encourage reuse; repository characteristics; and data description. Results: Our small-scale study suggests that cited/downloaded datasets generally comply with basic recommendations for facilitating reuse: data are documented well, formatted for use with a variety of software, and shared in established, open access repositories. Three significant factors also appear to contribute to dataset discovery: publishing in discipline-specific repositories; indexing in more than one location on the web; and using persistent identifiers. The cited/downloaded datasets in our analysis came from a few specific disciplines and tended to be funded by agencies with data publication mandates. Conclusions: The results of this exploratory research provide insights that can inform academic librarians as they work to encourage discovery and reuse of institutional datasets. Our analysis also suggests areas in which academic librarians can target open data advocacy in their communities in order to build open data success stories that will fuel future advocacy efforts.

Item: Citations as Data: Harvesting the Scholarly Record of Your University to Enrich Institutional Knowledge and Support Research (2017-11)
Sterman, Leila B.; Clark, Jason A.
Many research libraries are looking for new ways to demonstrate value to their parent institutions. Metrics, assessment, and promotion of research continue to grow in importance, but they have not always fallen within the research library's scope of services. Montana State University (MSU) Library recognized a need and an interest to quantify the citation record and scholarly output of our university. With this vision in mind, we began positioning citation collection as the data engine that drives scholarly communication, deposits into our institutional repository (IR), and assessment of research activities.
We envisioned a project that would provide transparency around the acts of scholarship at our university, celebrate the research we produce, and build new relationships between our researchers. The result was our MSU Research Citation application (https://arc.lib.montana.edu/msu-researchcitations/) and our research publication promotion service (www.montana.edu/research/publications/). The application and accompanying services are predicated on the principle that each citation is a discrete data object that can be searched, browsed, exported, and reused. In this formulation, the records of our research publications are data that can open up possibilities for new library projects and services.

Item: The enemy of the good (2017-08)
Sterman, Leila B.
Green open access, the subsection of open access in which no additional money changes hands and a version of a paper is posted online, is the most financially accessible means of providing broad access to research for many authors, and it consumes a great amount of librarian time. The most common form of green open access is the deposit of postprints: versions of papers that have been through peer review but often not copyediting or journal layout and typesetting. Journal publishers allow these versions to be posted, with restrictions, on the understanding that scholars will seek out the version of record and cite that work in any future publication. The secondary versions therefore do not impede the most valuable metric of journal publication, citations, and do not impact subscriptions, since discovery happens at an individual level and purchasing at an institutional level.
Here, Sterman discusses how specifics in publishers' green OA policies are bogging down IR deposits of scholarly literature.

Item: RAMP - The Repository Analytics and Metrics Portal: A prototype Web service that accurately counts item downloads from institutional repositories (2016-11)
O'Brien, Patrick; Arlitsch, Kenning; Mixter, Jeff; Wheeler, Jonathan; Sterman, Leila B.
Purpose: The purpose of this paper is to present data that begin to detail the deficiencies of the log file analytics reporting methods commonly built into institutional repository (IR) platforms. The authors propose a new method for collecting and reporting IR item download metrics, and introduce a web service prototype that captures activity that current analytics methods are likely to either miss or over-report. Design/methodology/approach: Data were extracted from the DSpace Solr logs of an IR and cross-referenced with Google Analytics and Google Search Console data to directly compare the Citable Content Downloads recorded by each method. Findings: This study provides evidence that log file analytics data appear to grossly over-report due to traffic from robots that are difficult to identify and screen. The study also introduces a proof-of-concept prototype that makes the research method easily accessible to IR managers who seek accurate counts of Citable Content Downloads. Research limitations/implications: The method described in this paper does not account for direct access to Citable Content Downloads originating outside Google Search properties. Originality/value: This paper proposes that IR managers adopt a new reporting framework that classifies IR page views and download activity into three categories that communicate metrics about user activity related to the research process.
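The robot-traffic problem that inflates log-file download counts can be illustrated with a minimal sketch. This is not the RAMP implementation; the log entries, user-agent strings, and bot pattern below are hypothetical examples of the kind of filtering that download counting requires.

```python
# Minimal sketch: count PDF download requests from a server log while
# excluding entries whose user agent looks robotic. Real robot screening
# is much harder, which is the problem the RAMP study documents.
import re

BOT_PATTERN = re.compile(r"bot|crawler|spider|slurp", re.IGNORECASE)

def count_item_downloads(log_entries):
    """Count .pdf requests whose user agent does not match known bot terms."""
    count = 0
    for path, user_agent in log_entries:
        if path.lower().endswith(".pdf") and not BOT_PATTERN.search(user_agent):
            count += 1
    return count

entries = [
    ("/bitstream/1/3/thesis.pdf", "Mozilla/5.0 (Windows NT 10.0)"),
    ("/bitstream/1/3/thesis.pdf", "Googlebot/2.1"),
    ("/handle/1/3", "Mozilla/5.0 (Macintosh)"),
]
print(count_item_downloads(entries))  # 1: one non-robot PDF download
```

Simple pattern matching like this misses robots that disguise their user agents, which is why the study cross-references log data with external sources such as Google Search Console.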
It also proposes that IR managers rely on a hybrid of existing Google services to improve reporting of Citable Content Downloads, and it offers a prototype web service where IR managers can test results for their repositories.

Item: Undercounting File Downloads from Institutional Repositories (Emerald, 2016-10)
O'Brien, Patrick; Arlitsch, Kenning; Sterman, Leila B.; Mixter, Jeff; Wheeler, Jonathan; Borda, Susan
A primary impact metric for institutional repositories (IR) is the number of file downloads, which are commonly measured through third-party web analytics software. Google Analytics, a free service used by most academic libraries, relies on HTML page tagging to log visitor activity on Google's servers. However, web aggregators such as Google Scholar link directly to high-value content (usually PDF files), bypassing the HTML page and failing to register these direct access events. This paper presents evidence from a study of four institutions demonstrating that the majority of IR activity is not counted by page-tagging web analytics software, and it proposes a practical solution for significantly improving the reporting relevancy and accuracy of IR performance metrics using Google Analytics.

Item: Data set supporting study on Undercounting File Downloads from Institutional Repositories [dataset] (Montana State University ScholarWorks, 2016-07)
O'Brien, Patrick; Arlitsch, Kenning; Sterman, Leila B.; Mixter, Jeff; Wheeler, Jonathan; Borda, Susan
This dataset supports the study published as "Undercounting File Downloads from Institutional Repositories". The following items are included:
1. gaEvent.zip = PDF exports of Google Analytics Events reports for each IR.
2. gaItemSummaryPageViews.zip = PDF exports of Google Analytics Item Summary Page Views reports. Also included is a text file containing the regular expressions used to generate each report's Advanced Filter.
3. gaSourceSessions.zip = PDF exports of Google Analytics Referral reports used to determine the percentage of referral traffic from Google Scholar. Note: does not include Utah, due to issues with the structure of Utah's IR and the configuration of their Google Analytics.
4. irDataUnderCount.tsv.zip = TSV file of the complete Google Search Console data set, containing 57,087 unique URLs in 413,786 records.
5. irDataUnderCountCiteContentDownloards.tsv.zip = TSV file of the Google Search Console records containing the Citable Content Download records that were not counted in Google Analytics.

Item: Cited/Downloaded Dataset and Repository Characteristics [dataset] (Montana State University ScholarWorks, 2016-02)
Mannheimer, Sara; Borda, Susan; Sterman, Leila B.
This rubric documents the characteristics of high-use datasets and their repositories, with "high-use" defined as either highly cited in Thomson Reuters' Data Citation Index or highly downloaded in an institutional repository. The authors reviewed publicly available information on repository websites and recorded our observations in the rubric. The rubric addresses six major characteristics of high-use datasets and their repositories: basic information; funding agency and journal information; linking and sharing; factors to encourage reuse; repository characteristics; and data description.

Item: Montana State University Research Data Census Instrument, Version 1 (2015-01)
Arlitsch, Kenning; Clark, Jason A.; Hager, Ben; Heetderks, Thomas; Llovet, Pol; Mannheimer, Sara; Mazurie, Aurélien J.; Sheehan, Jerry; Sterman, Leila B.
Montana State University developed the Research Data Census (RDC) to engage our local research community in an interactive dialogue about their data.
The research team was particularly interested in learning more about the following issues at Montana State: the size of research data; the role that local and wide-area networks play in accessing and sharing resources; data sharing behaviors; and interest in existing services that assist with the curation, storage, and publication of scientific data discoveries.