January 6th, 2013
Today I will be co-convening a session with Ray Siemens at the Modern Language Association’s annual conference on the topic of using and adapting research methods typically associated with social sciences to research in the humanities. Our panelists include colleagues Eric Meyer (OII), Lindsay Thomas and Dana Solomon (UC Santa Barbara), James Kelley (Mississippi), and Lynne Siemens (University of Victoria).
Holders of digital collections are increasingly asked to demonstrate their impact as they seek additional resources for maintenance and growth, but their training as experts in the humanities content of the collections has not often included the social science and measurement skills needed to understand uses. Technological advances in the humanities have led to a re-envisioning and re-interpreting of traditional approaches and present an opportunity to enlarge and extend methodological practice with the inclusion of new disciplinary approaches (see Siemens 2010). However, researchers using social science methods must move beyond their disciplinary training into “unmarked” territory, often with limited disciplinary support or guidance (see Collins, Bulger, and Meyer 2012). Some literary scholars are employing social science methodologies such as interviews, ethnographies, surveys, and statistical data analysis in their research (some examples include: Galey and Ruecker 2010; Siemens et al. 2009; Hoover 2008). Despite increasing need and expectation for studies of use and users, few humanities programs provide support or training in this area. Our interdisciplinary panel draws on the expertise of literary scholars and social scientists to explore several strategies that can support the adoption of social science methodology in literary studies and other humanities research.
(1) First, Social Scientists themselves can provide examples of how these research methodologies might be employed within the study of the Humanities by participating directly in this research and presenting at a variety of disciplinary-oriented conferences. In a series of projects starting in 2008, Eric Meyer and his colleagues found that understanding users and uses is increasingly important in the digital humanities as research becomes more dependent on shared digital resources. As part of our roundtable discussion, Meyer will describe the online toolkit developed in response to these findings that allows non-experts to use quantitative and qualitative tools to understand the kinds of impacts digital humanities materials are having. The toolkit includes tools that range from focus groups and interviews to webometrics, log analysis, surveys, and Twitter tracking. Meyer will briefly demonstrate the toolkit, but will mainly focus on the case studies done by humanities experts, and on their reflections about the process. Emerging from discussion of these case studies will be Meyer’s key argument, that training and support for domain experts in the humanities to use social science research methods can be more powerful than bringing in external social science or computing specialists who may understand measurement, but are less likely to understand the context of the resources and the communities who rely on them.
(2) Lynne Siemens will argue that a second strategy to support adoption of social science methodology involves creating opportunities for discussion and collaboration between Social Scientists and Humanists through development of online resources and involvement in interdisciplinary centres (for example, see Centre for Research in the Arts 2011; King’s College London 2012; Digital Humanities Center for Japanese Arts and Cultures 2008). Siemens will discuss how these relationships increase the likelihood of those “accidental meetings” and planned sessions which serve as a foundation for interdisciplinary innovation, collaboration and knowledge sharing (Cech and Rubin 2004). As University of Victoria’s ETCL has found, associated Social Scientists can provide mentorship and guidance to students and researchers in the adoption of this methodology, particularly around the development of interview scripts, surveys and ethics applications (ETCL nd-b).
(3) Given that researchers learn appropriate disciplinary methodology during graduate training, which is later reinforced through institutional rewards and recognition policies (Bruhn 2000; Gold and Gold 1985), a third strategy must be to create opportunities that allow Humanities scholars to explore Social Science methodology. Dana Solomon and Lindsay Thomas, current doctoral candidates at the University of California, Santa Barbara, will describe their experience in performing usability studies of the Research-oriented Social Environment (RoSE), a system that encourages users to seek out relationships between authors, works, and commentators–living and dead–as part of a social network of knowledge (funded by a National Endowment for the Humanities Digital Humanities Start-Up Grant, and directed by Professor Alan Liu). Solomon and Thomas will describe their process of examining how RoSE users interact with other people and documents in and through the system, providing an account of their experience applying social science research methods to study users and uses.
(4) James Kelley will further examine the process of learning and applying social science methods to literary research. Kelley applied grounded theory, a method widely used in qualitative research in the social sciences, to explore what teachers generally say and do not say when talking in online forums with students and with each other about assigned novels. He will describe his method for analyzing over 5,466 posts and conclude by addressing the additional challenge of explaining the value and practical methods of grounded theory to audiences who are largely unfamiliar with it.
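Grounded theory begins with hand-assigned open codes that are then compared and tallied across the corpus. As a rough illustration of that first tallying pass (the data and code labels here are hypothetical, not Kelley’s actual categories or workflow), a minimal sketch might look like:

```python
from collections import Counter

# Hypothetical hand-coded forum posts: each post carries the open codes
# an analyst assigned to it during a grounded-theory first pass.
coded_posts = [
    {"id": 1, "codes": ["plot summary", "assessment talk"]},
    {"id": 2, "codes": ["plot summary"]},
    {"id": 3, "codes": ["personal response", "assessment talk"]},
    {"id": 4, "codes": ["plot summary", "personal response"]},
]

# Tally code frequencies to see which themes dominate the discussion
# and, just as importantly, which expected themes are absent.
code_counts = Counter(code for post in coded_posts for code in post["codes"])

for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```

At the scale of thousands of posts, this kind of simple tally is only the starting point; the analytical work lies in the iterative comparison and refinement of the codes themselves.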
December 3rd, 2012
Today I’m giving a lecture about learning environments that promote interdisciplinary dialogue in Internet Science. After 10+ years working in an interdisciplinary space, I take for granted how much easier it has become. I forget the many times I sat through lectures that felt like a foreign language, where only every third word made sense. I also forget how difficult it was to start talking to people in other disciplines: graduate students already had their cohort, faculty their students, so showing up without speaking their language meant it took time to become part of their conversations.
During the summer, I convened, along with Cristobal Cobo, Tim Davies, and Ian Brown, a summer school as part of the Network of Excellence in Internet Science. It was a week-long program for early career researchers to engage in interdisciplinary discussions. The topic was privacy, trust, & reputation and each morning, two invited lecturers would discuss these topics from their disciplinary perspectives. We invited faculty from computer science to discuss the technical dimensions of privacy, law professors to explain what is and is not feasible to regulate, and an educational researcher who asks students to draw pictures of who they think connects to them on the Internet. This approach did not seem to ruffle any feathers. However, the afternoon sessions were a more challenging sell.
We blocked out the afternoons for interdisciplinary discussion. I modeled the discussions after seminars offered by UCSB’s CITS PhD emphasis, in which we select a topic, for example “reading,” “trust,” or “mobile phones,” and graduate students each present the research questions and methods their discipline would use to approach it. For the summer school, we had to convince colleagues at each stage that it was definitely a good idea to give the participants three hours per day for interdisciplinary discussion. After the first day, we had to convince the participants it was a good idea. By the third day, they were busy talking to each other.
There are some questions that cannot be answered by a single discipline. This challenge existed before the Internet (a recent example provided by Radiolab: why isn’t blue mentioned in The Odyssey and The Iliad?), but like information, communication, even love, the Internet magnifies and accelerates it. Disciplinary silos do not serve Internet research well and really, how could they? How can we approach any experience on the Internet without considering the technical backbone behind it? How can technologies be developed without considering user needs and behaviors? How can we understand the Internet from a purely scientific view without considering the art that makes it work: that people continue to find new ways of using it, that these uses continue to surprise and challenge, that technologies tend to serve purposes other than those initially intended, and that why this happens is worth studying?
Much Internet research has spanned disciplinary boundaries and enabled us to better understand
–why and how people organize online and off,
–which groups are excluded from fully participating in the Internet and why,
–the challenges of protecting personal information, why it might matter and why it might not, and
–how knowledge is developed and shared, individually and institutionally.
I was motivated to continue attending seminars and lectures outside my field because I had a question I couldn’t answer: I wanted a way to measure students’ online research practices, to understand why they selected certain sources and how they used them. Despite how initially uncomfortable it was for an Education student with no programming background to attend courses in Computer Science, or with a limited quantitative research background to attend Cognitive Psychology courses, I was motivated to keep trying.
Why not interdisciplinary?
As part of the summer school, we asked students what barriers prevented them from pursuing interdisciplinary work. Some said they didn’t really have an interest in working with other disciplines, or didn’t know anyone to collaborate with in their area of interest. Others said that funding was more of a challenge when working across disciplines, or they had experienced difficulty in being accepted to conferences or having articles accepted for publication because their interdisciplinary work did not fit. Others said there was no professional incentive to collaborate because the journals and conferences they would submit to were not viewed as prestigious within their departments. The responses were similar to those I heard in informal conversations as a graduate student. These are fair points. Interdisciplinary work is still in its infancy and despite efforts by AoIR and the increasing number of interdisciplinary journals, publications and professional recognition still seem to be exceptional. However, as demonstrated by initiatives in the digital humanities, in the small, but growing number of interdisciplinary departments, institutions can slowly change the paradigm of recognition.
Where to start?
NSF Interactive Digital Multimedia IGERT
During my graduate years at UCSB, several interdisciplinary opportunities were emerging. The National Science Foundation funded students through their IGERT program to work in interdisciplinary groups to develop projects. IGERT students were drawn from across campus, including computer science, cognitive science, art, and geography. All students were awarded tuition plus a stipend and travel funds and were given a collaborative workspace. The program required that students work in small groups with other IGERT students and attend a Friday seminar on interdisciplinary topics related to their research.
In my experience, while funding was abundant, faculty support never materialized. The Friday lectures always seemed to be on very specialized topics, without an organizing theme. Student projects were generally based on the PIs’ interests, rather than generated by the students, and those not in engineering did not receive as much mentorship. Yet the projects afforded an opportunity to discuss disciplinary approaches and discover complementary questions and methods. Unfortunately, many students encountered difficulties in publishing their interdisciplinary work or being accepted to conferences, so they prioritized their disciplinary work.
Cognitive Science PhD emphasis
Based on interest from engineers studying artificial intelligence and geographers trying to get a better sense of how people approach maps, the Department of Psychology started offering a seminar series that combined a few quarterly lectures on interdisciplinary topics with a weekly student seminar. The seminar was particularly targeted toward students and faculty outside of Psychology, so it provided relevant background information and an overview of methods at the start of each quarter. Faculty were invited from across campus and usually brought interested graduate students.
The Cognitive Science emphasis allowed graduate students from outside Psychology to take coursework and receive an emphasis on their diploma in Cognitive Science. In addition to the coursework, participants were required to have two members of the interdisciplinary emphasis on their dissertation committee, to complete a research paper or proposal in Cognitive Science, and for cognitive science to be a central focus in their dissertation.
Also held on Friday afternoons, the course discussion was lively, often resulting in long disagreements over disciplinary assumptions. Students normally continued the discussion over drinks or dinner, so the emphasis also succeeded in creating an interdisciplinary community.
Center for Information Technology & Society PhD emphasis
Core faculty from Political Science, Computer Science, Film Studies, Sociology, and English started a lecture series around 1999 that grew into a graduate seminar about four years later. The strength of the seminar was that a core group of faculty was always present, sometimes outnumbering the students, to discuss their discipline’s approach to whatever was the topic of discussion. As mentioned above, early seminars were organized around a theme, for example, mobile phones, and each week two graduate students and sometimes faculty would suggest readings to the group and discuss research questions they would ask around the topic. The faculty modeled respectful dialogue, but pushed each other and the students to challenge their disciplinary assumptions.
The graduate seminar series also evolved into a PhD emphasis, allowing students to receive recognition for coursework completed in the area of Technology & Society. Similar to the Cognitive Science emphasis, the Technology & Society emphasis drew an interdisciplinary faculty steering committee from across campus. In addition to the seminar, coursework included courses in two areas: Culture & History and Society & Behavior. Courses were offered through several departments including Anthropology, Environmental Science and Management, History, Education, and Communication (in addition to the disciplines listed at the beginning of this section). Students were required to have a faculty member from the steering committee be part of their dissertation committee and to complete a dissertation that related to Technology & Society.
The program modeled interdisciplinary dialogue and provided opportunities for students to work on research projects with faculty and students from other departments. In fact, a few faculty received grants specifically to foster interdisciplinary collaboration and created strong cohort relations through these research opportunities.
[stay tuned...more to come]
April 25th, 2012
Funding bodies are increasingly requiring evidence of impact for higher education efforts in outreach and public engagement, yet measuring this impact is challenging. A review of current practice combined with interviews of public engagement experts in the UK underscored the degree to which outcomes of public engagement and outreach efforts are often not immediately visible, but rather diffuse and developed over time.
This challenge in measuring impact was a main point of conversation on Friday when the OII hosted the judging panel for our first-time European Competition for Best Innovations in University Outreach and Public Engagement, supported by the European Commission as part of the ULab project. The panel consisted of experts in public engagement from the ULab partner countries. Judges represented a range of backgrounds, including funding officers, policymakers, and those engaged in award-winning outreach.
When reporting impact, the majority of entries cited audience size, whether measured by attendance, web traffic, web clicks, or distribution of printed materials. Yet, this reporting usually did not directly measure progress on the stated aims of the projects. For example, if a project aims to change perceptions, or increase engagement with science, simple attendance at a fair or visits to a website cannot show whether these objectives were met. Tracking whether the person then attends another science event, enrolls in a science class, or undertakes a degree in science may provide a stronger measure, but still the extent of influence of a particular event is unknown. Of course, the problem is that indicators of input are more easily and economically gathered than indicators of impact, which are costly to collect, require additional activity, and may not yield definitive answers.
However, some entries used participant surveys to gather more details about impact and experience. These ranged in depth from like/dislike buttons to short-answer questions about learning. Some projects engaged their participants in focus group discussions aimed at improving their offerings and better meeting the needs of their target audience. A few looked at impacts over time, showing shifts in community engagement, or increases in student enrollment.
The question of depth versus breadth of impact was debated among our judging panel, showing that even experts have diverse opinions of what constitutes models of good practice in public engagement. We found that entries from a number of European countries focused on the numbers of people reached as a measure of success. For example, events such as the Researcher’s Nights hosted throughout Europe aim to engage large numbers of community members in short events and therefore measure success by attendance levels. By way of contrast, several entries from the UK placed an emphasis on depth of engagement. In other words, a project may have fewer than 20 participants, but weeks or months of interaction that lead to a change in behavior, for example, a demonstrated interest in science, or a better understanding of a different social perspective. Some judges placed more emphasis on breadth, while others focused on depth.
Overall, even in top-ranked entries, the judges found that impact measures were not always ideal or fully developed and recommended that the projects should consider developing more evidence of impact. One of the winning entries, Active Science, demonstrated a multi-faceted approach to impact measures. In addition to reporting on audience attendance, the programme also demonstrated its success through tracing its expansion from a local to nation-wide project. To measure whether the project was successful in shifting perceptions about science, participants were given pre- and post-activity questionnaires that measured their attitudinal approaches.
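To make the pre- and post-activity comparison concrete, here is a minimal sketch of how such questionnaire data might be scored. The Likert-scale responses below are invented for illustration; this is not Active Science’s actual instrument or data.

```python
# Hypothetical 1-5 Likert responses to a statement such as
# "I find science interesting", collected from the same participants
# before and after the activity.
pre_scores = [2, 3, 2, 4, 3, 2]
post_scores = [4, 4, 3, 5, 4, 3]

# Per-participant change, and the mean shift across the group.
shifts = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_shift = sum(shifts) / len(shifts)

print(f"Mean attitude shift: {mean_shift:+.2f} points")
# A paired significance test (e.g. Wilcoxon signed-rank) would be
# needed before claiming the shift is statistically meaningful.
```

Even this simple design is stronger than attendance counts alone, because it ties the measure directly to the project’s stated objective of shifting perceptions.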
Matching clearly defined objectives to appropriate measures may improve reporting and evaluation of impact. The wealth of activities and approaches reported in the EngageU entries provides a repository of strong examples of impact measures as well as a means of comparing how projects with similar objectives measure their impact.
Of course, you may have very different views on the centrality of impact assessment for outreach. We would appreciate hearing from you.
April 12th, 2011
Eleven years after Brown & Duguid (2000) released The Social Life of Information, we find that even in the humanities, a field that typically conjures an image of a lone scholar toiling in dusty archives, the process of research is very much a social endeavor. Last week, in collaboration with the Research Information Network, we released Reinventing Research? Information Practices in the Humanities, a study of 54 humanities scholars across disciplines such as history, English, and philosophy in 25 institutions in 5 countries. Through interviews, focus group discussions, and web history logs, we examined their use of information, dissemination practices, and collaborative activities.
The scholars we interviewed described the tradition of collaboration within their respective disciplines. Unlike the sciences, in which research frequently involves large teams and multi-authored articles, collaboration in the humanities is more nuanced. One of our case studies, The Digital Republic of Letters, traces correspondences during the Enlightenment. These correspondences include letters from Descartes, Van Gogh, and Grotius, among others. The centuries-old collaboration methods examined by this group underlie current practice. Then, letters sent back and forth reported, unpacked, tested, and developed theories. Sound familiar? The description could easily be applied to e-mail, seminars, conference presentations, or hallway discussions. Research then and now begins with the sharing of ideas.
While not overtly collaborative in the scientific sense of the term, humanities scholars engage in research that “is done in conversation.” In addition to the above examples, scholars engage in this conversation through their work in archives, when they prepare materials for digital access, and when they report on rare materials, making previously obscure knowledge available to a larger public. They support each other in their work by talking through ideas and texts, presenting preliminary ideas that later become papers or monographs. Primarily, their research practices are source-intensive, but the sense-making process is very much accomplished in community.