January 6th, 2013
Today I will be co-convening a session with Ray Siemens at the Modern Language Association’s annual conference on the topic of using and adapting research methods typically associated with social sciences to research in the humanities. Our panelists include colleagues Eric Meyer (OII), Lindsay Thomas and Dana Solomon (UC Santa Barbara), James Kelley (Mississippi), and Lynne Siemens (University of Victoria).
Holders of digital collections are increasingly asked to demonstrate their impact as they seek additional resources for maintenance and growth, but their training as experts in the humanities content of the collections has not often included the social science and measurement skills needed to understand users and uses. Technological advances in the humanities have led to a re-envisioning and re-interpreting of traditional approaches and present an opportunity to enlarge and extend methodological practice with the inclusion of new disciplinary approaches (see Siemens 2010). However, researchers using social science methods must move beyond their disciplinary training into “unmarked” territory, often with limited disciplinary support or guidance (see Collins, Bulger, and Meyer 2012). Some literary scholars are employing social science methodologies such as interviews, ethnographies, surveys, and statistical data analysis in their research (some examples include: Galey and Ruecker 2010; Siemens et al. 2009; Hoover 2008). Despite increasing need and expectation for studies of use and users, few humanities programs provide support or training in this area. Our interdisciplinary panel draws on the expertise of literary scholars and social scientists to explore several strategies that can support the adoption of social science methodology in literary studies and other humanities research.
(1) First, Social Scientists themselves can provide examples of how these research methodologies might be employed within the study of the Humanities by participating directly in this research and presenting at a variety of discipline-oriented conferences. In a series of projects starting in 2008, Eric Meyer and his colleagues found that understanding users and uses is increasingly important in the digital humanities as research becomes more dependent on shared digital resources. As part of our roundtable discussion, Meyer will describe the online toolkit developed in response to these findings that allows non-experts to use quantitative and qualitative tools to understand the kinds of impacts digital humanities materials are having. The toolkit includes tools that range from focus groups and interviews to webometrics, log analysis, surveys, and Twitter tracking. Meyer will briefly demonstrate the toolkit, but will mainly focus on the case studies done by humanities experts, and on their reflections about the process. Discussion of these case studies will lead to Meyer’s key argument: that training and support for domain experts in the humanities to use social science research methods can be more powerful than bringing in external social science or computing specialists who may understand measurement but are less likely to understand the context of the resources and the communities who rely on them.
(2) Lynne Siemens will argue that a second strategy to support adoption of social science methodology involves creating opportunities for discussion and collaboration between Social Scientists and Humanists through development of online resources and involvement in interdisciplinary centres (for example, see Centre for Research in the Arts 2011; King’s College London 2012; Digital Humanities Center for Japanese Arts and Cultures 2008). Siemens will discuss how these relationships increase the likelihood of those “accidental meetings” and planned sessions which serve as a foundation for interdisciplinary innovation, collaboration, and knowledge sharing (Cech and Rubin 2004). As the University of Victoria’s ETCL has found, associated Social Scientists can provide mentorship and guidance to students and researchers in the adoption of this methodology, particularly around the development of interview scripts, surveys, and ethics applications (ETCL nd-b).
(3) Given that researchers learn appropriate disciplinary methodology during graduate training, which is later reinforced through institutional rewards and recognition policies (Bruhn 2000; Gold and Gold 1985), a third strategy must be to create opportunities that allow Humanities scholars to explore Social Science methodology. Dana Solomon and Lindsay Thomas, current doctoral candidates at the University of California, Santa Barbara, will describe their experience performing usability studies of the Research-oriented Social Environment (RoSE), a system that encourages users to seek out relationships between authors, works, and commentators, living and dead, as part of a social network of knowledge (funded by a National Endowment for the Humanities Digital Humanities Start-Up Grant, and directed by Professor Alan Liu). Solomon and Thomas will describe their process of examining how RoSE users interact with other people and documents in and through the system, providing an account of their experience applying social science research methods to study users and uses.
(4) James Kelley will further examine the process of learning and applying social science methods to literary research. Kelley applied grounded theory, a method widely used in qualitative research in the social sciences, to explore what teachers generally say and do not say when talking in online forums with students and with each other about assigned novels. He will describe his method for analyzing 5,466 posts and conclude by addressing the additional challenge of explaining the value and practical methods of grounded theory to audiences who are largely unfamiliar with it.
January 16th, 2012
During graduate school, I participated in an experimental seminar, “Literature+: Cross-Disciplinary Models of Literary Interpretation,” taught by Alan Liu. He asked students to form groups around topics of their choosing and perform analyses using digital tools on their materials. Most students shared similar research interests and organized their projects around a content-based theme. Our group represented four different disciplines and formed around our interest in digital tools, rather than content. Professor Liu created a toybox of links to various textual analysis tools that generated visualizations, translations, data about word counts, etc. Each of us took a tool in which we were to become “expert,” and applied that tool to data we had collected for our research.
In our recently published book chapter, “Interdisciplinary Knowledge Work: Digital Textual Analysis Tools and Their Collaboration Affordances,” our motley team discusses how we applied these digital tools to our research goals and collaborative work. The most important lesson our collaborative experience taught us is that working together both pushed us and liberated us to experiment with our data and methods. In fact, much like our visualizations provide a big-picture view of the texts we study, the multidisciplinary nature of our process forced us to step back and view our research at a macro level. Although our collaboration began as a class project, playing together with technologies led each of us to new and significant understandings of our texts.
April 12th, 2011
Eleven years after Brown & Duguid (2000) released their Social Life of Information, we find that even in the humanities, a field that typically conjures an image of a lone scholar toiling in dusty archives, the process of research is very much a social endeavor. Last week, in collaboration with the Research Information Network, we released Reinventing Research? Information Practices in the Humanities, a study of 54 humanities scholars across disciplines such as history, English, and philosophy in 25 institutions in 5 countries. Through interviews, focus group discussions, and web history logs, we examined their use of information, dissemination practices, and collaborative activities.
The scholars we interviewed described the tradition of collaboration within their respective disciplines. Unlike the sciences, in which research frequently involves large teams and multi-authored articles, collaboration in the humanities is more nuanced. One of our case studies, The Digital Republic of Letters, traces correspondence during the Enlightenment. This correspondence includes letters from Descartes, Van Gogh, and Grotius, among others. The centuries-old collaboration methods examined by this group underlie current practice. Then, letters sent back and forth reported, unpacked, tested, and developed theories. Sound familiar? The description could easily be applied to e-mail, seminars, conference presentations, or hallway discussions. Research then and now begins with the sharing of ideas.
While not overtly collaborative in the scientific sense of the term, humanities scholars engage in research that “is done in conversation.” In addition to the above examples, scholars engage in this conversation through their work in archives, when they prepare materials for digital access, and when they report on rare materials, making previously obscure knowledge available to a larger public. They support each other in their work by talking through ideas and texts, presenting preliminary ideas that later become papers or monographs. Primarily, their research practices are source-intensive, but the sense-making process is very much accomplished in community.
February 27th, 2010
Yesterday, I attended the RoSE Design Charrette, hosted by the Transliteracies Project at UCSB. Under Alan Liu’s guidance, many interesting, innovative interdisciplinary projects have emerged from this program.
In anticipation of attending, I read the online description of RoSE. Like seeing a movie preview, I started to imagine what I thought RoSE would be. I hoped for an improvement on current content delivery systems; in particular, I wanted something that could ease the challenge of finding high-quality information online. A couple of years ago, I had a conversation with one of Transliteracies’ project members, Pablo Colapinto. As with most good conversations, this one happened over lunch, and I was expressing my dissatisfaction that we weren’t living like the Jetsons yet, in particular, how they could tell their computer what they wanted to eat and it would appear on a conveyor belt.
As our conversation continued, Pablo asked me, “How would you like your information served?” This question has stuck with me, because it seems so necessary and practical, and yet doesn’t seem to be addressed by current systems. Why, when engaging in search or any use of the Internet, can’t the user filter information according to demographics or other preferences?
To a certain extent, online advertising practices provide a model for this type of targeted content delivery. Using psychographic (e.g., attitudes or opinions), sociographic (e.g., purchasing behaviors), and demographic (e.g., age, ethnicity, location) data, advertisers create profiles of specific types of users. Online user behaviors are collected by Google Analytics, Facebook Connect, and many others. Our offline data is also collected and connected to our online behaviors. Offline data aggregators include Acxiom, Experian, and TargusInfo. Combining data about what we buy with information about what websites we frequent, advertisers put us into affinity segments, which are basically groupings that reflect our behaviors and preferences. For example, someone who recently purchased a BMW and read TripAdvisor reviews for the Sofitel is likely to be put in the luxury segment. Likewise, someone who booked a Disney vacation and researched vaccination information would probably get categorized into a parenting segment. Using behavioral data of others in the same affinity segment, advertisers can predict the types of ads likely to interest you.
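The segmentation logic described above can be sketched in a few lines of code. This is a minimal, purely illustrative toy: the segment names, trigger behaviors, and click data are hypothetical, not any advertiser's actual taxonomy or pipeline.

```python
# Toy sketch of rule-based affinity segmentation and peer-based ad prediction.
# All behaviors, segments, and click data below are invented for illustration.
from collections import Counter

# Each user is a set of observed behaviors (purchases, sites visited).
users = {
    "alice": {"bought_bmw", "read_tripadvisor_sofitel"},
    "bob": {"booked_disney_vacation", "searched_vaccinations"},
    "carol": {"bought_bmw", "booked_disney_vacation"},
}

# A segment rule fires when a user exhibits any of its trigger behaviors.
segment_rules = {
    "luxury": {"bought_bmw", "read_tripadvisor_sofitel"},
    "parenting": {"booked_disney_vacation", "searched_vaccinations"},
}

def segments_for(behaviors):
    """Assign every segment whose triggers overlap the user's behaviors."""
    return {name for name, triggers in segment_rules.items()
            if behaviors & triggers}

# Ads clicked by each user, used to infer what segment peers might like.
ad_clicks = {"alice": ["hotel_ad"], "carol": ["hotel_ad", "theme_park_ad"]}

def predict_ads(user):
    """Rank ads by how often peers sharing a segment with `user` clicked them."""
    mine = segments_for(users[user])
    peers = [u for u in users if u != user and segments_for(users[u]) & mine]
    counts = Counter(ad for p in peers for ad in ad_clicks.get(p, []))
    return [ad for ad, _ in counts.most_common()]

print(segments_for(users["alice"]))  # {'luxury'}
print(predict_ads("alice"))
```

Alice lands in the luxury segment via her BMW purchase; her predicted ads come from Carol, the only other luxury-segment user with recorded clicks. Real systems replace these hand-written rules with statistical models, but the peer-behavior logic is the same.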
The problem here, of course, is that it’s creepy for our browsers or search engines to start targeting content delivery based on behaviors/preferences we’re not 100% aware are being collected. However, if so much data is being collected about me, I’d like to use it to make my life easier. There’s an obvious tension between privacy and convenience.
The process could be transparent. I could set my browser or search engine preferences to deliver information based on my preferences. Just like ordering a sandwich, where you get a list and tick the boxes you’d like, I’d like a preferences option with drop-downs where I could say how I want my information delivered. I could finally have the option of filtering for the types of websites I’ll actually read.
Given the data that Google and others collect, it seems completely possible for me to enter my age, location (although I think Google already knows this), and an interest, say ‘teaching’, into a Google search, and the search engine could target its results based on what others with similar preferences selected. The results would be similar to Amazon’s “people who purchased this book also bought…” or “people who viewed this page ended up…” In essence, I could be served the information I want using filters based on information Google already collects, and using affinity segmenting to determine what I might like based on the behaviors of others with my shared filters. I think to a degree this already happens, but it is often apparent only in ads/sponsored links (and the process isn’t transparent), rather than in actual content filtered on projected usefulness/relevance.
These preference filters could sit on top of existing search algorithms. For example, when I search for Gran Torino, my preference filters would indicate that I’m interested in teaching, so I’d be served content based on what others interested in teaching viewed in relation to Gran Torino. Ideally, instead of getting all search results related to the film, I would receive listings targeted toward classroom use or discussions related to education. At the very least, perhaps search results could be filtered based on what others in my affinity group viewed, so it would prioritize reviews, places to purchase, etc., that I’m most likely to visit and save me a bit of sifting.
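Sitting a preference filter on top of an existing ranking could be as simple as a re-sort. Here is a minimal sketch of that idea; the result URLs and peer click counts are invented, and a real search engine would of course blend many more signals.

```python
# Hypothetical sketch: re-rank an existing result list using click counts
# from users whose declared preference filter also includes "teaching".
# URLs and click counts are invented for illustration.

def rerank(results, peer_clicks):
    """Stable re-sort: results clicked more often by affinity peers rise.

    Unclicked results keep their original (search-engine) order, since
    Python's sort is stable and their key is identical (0 clicks).
    """
    return sorted(results, key=lambda url: -peer_clicks.get(url, 0))

# Baseline results for a "Gran Torino" query, in the engine's default order.
results = [
    "imdb.com/gran-torino",           # generic film page
    "edutopia.org/gran-torino-unit",  # lesson plan for classroom use
    "rottentomatoes.com/gran_torino",
]

# Clicks by other users who share the 'teaching' preference filter.
teaching_peer_clicks = {
    "edutopia.org/gran-torino-unit": 42,
    "imdb.com/gran-torino": 3,
}

print(rerank(results, teaching_peer_clicks))
```

With the teaching filter on, the lesson-plan page jumps to the top while unclicked pages simply retain their original order, which matches the "at the very least" fallback above: nothing is hidden, just re-prioritized.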
Borrowing the model of affinity marketing, by categorizing myself based on preferences, I could benefit from the collective behaviors of people with shared interests. This preference filter would make the current opaque practice transparent and add convenience to my search.
Joseph Turow addressed a potential downside of targeted information delivery in his lecture “When the Audience Clicks: Buying Attention in the Digital Age,” presented at the Oxford Internet Institute. There’s a danger that once my Google searches and news viewing start to be delivered according to my preferences, I limit myself to only viewing information tailored to me. Potentially, as targeting becomes more sophisticated, we may lose the breadth of information currently provided by newspapers, television, and our expansive searches, and only receive information that confirms our beliefs or supports our preferences. While current media options already allow us to limit ourselves to a certain extent (e.g., Fox News), we may do so even further as content delivery becomes more targeted.
So, how to tame the super-sized information portal that is the Internet without sacrificing the breadth and choice we love? I’d like a balance between sifting through a mountain of results to find a few relevant links and restricting myself from broader views based on preferences I select. Seems possible. Perhaps we don’t need to keep the filters on all of the time, but they’d be there when we need them.
As it turns out, RoSE addresses issues of humanities scholarship, namely, identifying relationships between authors and their work. While it didn’t fulfill my Jetsons’ fantasy of chocolate cake on demand, it did prompt me to dream for a bit and envision the type of search I’d like to use.