November 22nd, 2013
To honor George Eliot (Mary Ann Evans)’s birthday, today’s post is about words and movement…
At a college dinner with a bunch of tech folks, we joked once that people have a poem or two in their conversational arsenal to quote at toasts or to reference when discussion warranted. The few times I’ve been asked to name a favorite poem, I’ve realized I could no sooner name a favorite poem than a favorite ice cream. It depends on mood, doesn’t it? Everyday life calls for Mary Oliver; romantic moments, Neruda at least, if not Keats, Wordsworth, or Sharon Olds; cynical and thoughtful moments, a host of poets. I love how Olds can make an orgasm fall off the page, or how Borges can challenge the notion of boundaries and definitions (I’m counting “On Exactitude” as a poem). Short stories for me often feel closer to poems than to novels. Pressed for favorites, I’d have to start with Kate Chopin, “The Story of an Hour” or “The Storm,” but I love the vivid, malleable world created by Marguerite Yourcenar in “How Wang-Fo Was Saved,” or a different world altogether created by Raymond Carver in “Why Don’t You Dance?” It’s challenging to achieve movement in words, and when a writer achieves rhythm and motion in language, the experience is bliss. The writings of Louise Erdrich, her late husband Michael Dorris, Amy Tan, Paulo Coelho, Maya Angelou, George Eliot, and Gabriel García Márquez seem to hum and invite us into their dance, but I’ve now digressed into novels. Ted Kooser used to send Valentine poems in the form of small postcards, which I use as bookmarks. Whenever I come upon them, it feels like a slice of tranquility, a moment caught listening to crickets or finding fireflies. Poetry for me feels like lazily skimming the surface while someone else takes the oars, always feeling the invitation to plunge further into the depths.
July 8th, 2013
Also posted at DML Central: http://dmlcentral.net/blog/monica-bulger/trouble-testing
It’s obviously summer, because my news alerts are no longer steadily reporting concerns about education, our children’s future, the problems with teachers, and so on. Perhaps now, then, is the perfect time to address the issue of testing and its troubles, while a little distance might provide perspective. So, why do we test? What do we hope the tests will achieve? Last summer, Thomas Friedman suggested that parents and teachers view classroom performance as CEOs do economic performance, to keep us competitive and to overcome our “education challenge.” In this light, testing helps us know where we stand in relation to other countries. Presumably, this knowledge will improve local performance because teachers can target areas of weakness. Yet learning research consistently shows that an emphasis on test scores does not necessarily lead to gains in academic performance. Perhaps learning, with its long-term gains and diffuse experiences, does not lend itself well to an economic model. Instead of focusing on test scores at the elementary and secondary levels, why not take a longer-term view? Why public education? What are our true goals for teaching and learning? When pressed, most politicians will state that the long-term goals of education are to develop a citizenry that maximizes contributions to society and the economy, yet our measures of success typically do not focus on those who embody these qualities. It seems that to move toward a goal of an educated, engaged citizenry, we should consider the skills, aptitudes, and experiences of those among us who do positively contribute to society, who embody the values we seek to instill, and measure our school systems in terms of these long-term outputs.
Current testing mostly focuses on short-term achievement without considering long-term goals. For example, after our early 20s, most of us don’t remember quadratic equations, regardless of how well we tested in our teens, but perhaps the lesson wasn’t the equation itself so much as the process of understanding or not understanding it. Learning is about more than its parts: it is about perseverance, the will to pursue a thing regardless of its obstacles and to adapt ideas and frameworks in light of new challenges and uncertainty. Yet these lessons aren’t adequately measured in national or international tests, not because of lack of interest but because of their intangibility. How many high school students considered ‘least likely to succeed’ succeeded anyway? When we limit our measures of potential to fragmented subject knowledge, we risk missing what truly contributes to fostering the citizenry to which we aspire.
Ticking boxes and demonstrating an understanding of concepts in isolation, the skills measured by PISA and ETS, do not prepare students well for the demands of university learning or future careers. Established predictors of academic success—patience, perseverance, and adaptability—are not measured by these exams. Another consistent predictor of academic and professional success, delayed gratification—identified in the now-famous marshmallow study that tested children’s ability to forgo short-term gratification in favor of long-term gain—cannot be measured by the PISA tests.
Also not reported through PISA is the ability to cross over between subjects and concepts, to see the applications of and relationships between ideas beyond a single instance. The abilities to imagine, to be creative, and to adapt ideas regardless of challenges or obstacles are predictors of innovation. In these skills, we could argue that many graduates of the U.S. school system have far surpassed their counterparts. For some reason, despite our ‘average’ scores, the U.S. continues to be a major contributor to technologies that change the world: the Internet, Facebook, Google, and the iPhone, to name but a few examples, not to mention innovative contributions in the areas of entertainment, health, and science. So perhaps testing well isn’t the only measure of success.
In looking at examples from Finland and South Korea, we want to take the learning gains à la carte, apart from the social systems that may foster them. We desire learning gains in mathematics, for example, but not a nationalized health system. Often absent from discussions of testing and improvements to education is the parents’ role in their children’s academic performance and future career choices (Friedman provides an excellent overview of the research in 2011). How do parents in these high-performing countries support their children’s learning? In the U.S., parents’ education level is consistently shown to be a predictor of academic success, yet corporate models of increased testing and academic competitiveness ignore the role of parents. There is a gap in expectations: we expect teachers to improve, school systems to improve, indeed even children to improve, but we do not consider what kinds of support parents might need in order to cultivate adequate learning environments for their children outside the classroom. We do not expect parents to play a stronger role in prioritizing education, and we further do not offer an infrastructure to aid parents in supporting their children’s learning experiences.
Friedman accurately states there is a mismatch in worldviews. The aggressive rhetoric of the ‘education challenge’ seems unhelpful, though, and I recommend instead focusing on the question David Graeber posed: why are there no flying cars? If an entire nation of schoolchildren scored perfectly on the standardized tests now, what would our society be like in 15-25 years? Would we enjoy more social equality, enlightened distribution of resources, flying cars? As a nation, we need to be clearer about what we’re aiming at, and the best place to start is to look around at the people who already embody these qualities and consider how our school system and larger society fostered their development.
May 28th, 2013
As an Internet researcher, I know the answer to this question: no. That’s right: no. Try as we might, we cannot completely control who views parts of our Facebook profiles. Using avatars and not posting an identifying photo may help ensure privacy, but in my experience, most friends use their real names or post an identifying picture, or can be identified through other friends.
I study and teach digital literacy. Part of my research is understanding privacy and how to protect it online. Yet in the past year, I haven’t been able to sort out privacy on FB photos. I’ve put time into this, and yet it’s still very confusing. Tonight, I chose the ‘view as’ and ‘public’ option to see how much of my relatively locked-down profile is available for public viewing. A few old profile pics and new cover photos appeared, apparently caught between the older privacy settings and the new ones. If you’re reading this and thinking of the five-step procedure I didn’t follow, stop. That’s my point. I’ve put a reasonable amount of time into understanding the settings, and yet there I am, out for the world to see*.
It should be easier and more intuitive to safeguard my privacy if I want to.
In my online child protection research, when I’m talking to parents, most are unaware that pics they post of their kids can be viewed by friends of their friends, not just their immediate friend circle. If anyone comments on the picture, depending on settings, then their friends see the image, too. Any of us can drag friends’ pictures to our desktop and then share however we’d like.
OK, adults struggle with the privacy settings; that might be a given. But how do we expect kids to understand the settings?
Invoke the digital natives argument if you must, that younger generations naturally understand these things better than those of us born before 1984 (see Helsper & Eynon’s articulate challenge to this argument). Or argue that kids and teens have different attitudes toward privacy than older generations. But for kids who want to safeguard against public exposure, how do they navigate the settings?
If a key recommendation for children and teens to safeguard themselves is to limit sharing photos, does FB provide the means to do so? This need for privacy runs counter to Zuckerberg’s vision of living in public, of sharing more rather than less. Despite FB’s revised privacy screens, limiting sharing still seems more challenging than necessary. For photos, it seems that each picture would need to be painstakingly tagged to limit the audience. I’ve seen advice that if you don’t want something publicly shared, don’t post it on the Internet. It seems there should be some middle ground, a very clear means of protecting privacy, not just on a per-picture basis but at the profile level; maybe a reversal, where the default is friends of friends and expanding the audience requires customization. Thoughts?
*Note: I share the same first and last name with a couple users, so I do have a bit of anonymity.
April 28th, 2013
“It’s the great thing about code,” he said of computer language. “It’s largely merit-driven. It’s not about what you’ve studied. It’s about what you’ve shipped.” –Jade Dominguez quoted in NYT, 27 April 2013
Gild, a San Francisco start-up, is taking the Moneyball approach to identifying potentially overlooked, talented programmers. As reported in the New York Times, the company uses an algorithm to find productive and well-respected programmers who may not have the traditional qualifications, such as graduating from a top school, working for a top company, or being referred by a current employee.
‘The start of something powerful’
This is more than a story of an interesting, possibly controversial new algorithm. Of course, the algorithm is limited to information that is measurable and publicly available. Even so, the traces we leave online potentially say a lot about us. Gild focuses on contributions to well-known programming communities, such as GitHub. The algorithm doesn’t stop at how many posts a programmer makes; it also considers how each individual’s participation is valued by the programming community. This participation and contribution, how much an applicant contributes to a community and how those bits of code are taken up, might not easily show on a resume, but could on a Google search. Skills-based, yes, but more so a record of how an applicant has demonstrated the capacity for engaging with a larger community through productive contributions.
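As a rough illustration only (this is not Gild’s actual algorithm, whose details are not public; the field names, weights, and numbers below are all invented), a merit score of this kind might weight community uptake of a programmer’s code above the sheer volume of contributions:

```python
from dataclasses import dataclass

@dataclass
class Contributor:
    """A programmer's public activity on a code-sharing site (hypothetical fields)."""
    commits: int   # raw volume of contributions
    stars: int     # how the community values those contributions
    forks: int     # how often the code is taken up and reused by others

def merit_score(c: Contributor) -> float:
    """Toy score: community uptake counts for more than raw posting volume.

    The weights (2 and 3) are arbitrary, purely for illustration.
    """
    volume = c.commits
    uptake = 2 * c.stars + 3 * c.forks
    return volume + uptake

# A prolific poster whose code few people reuse...
prolific = Contributor(commits=100, stars=5, forks=1)
# ...versus a quieter contributor whose code the community values highly.
respected = Contributor(commits=60, stars=120, forks=40)

# The contributor whose work is widely taken up outranks the one who merely posts a lot.
assert merit_score(respected) > merit_score(prolific)
```

The design point is the one the post makes: a naive count of posts rewards noise, while folding in signals of community uptake rewards contributions the community actually values.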
The chief scientist at Gild, Dr. Vivienne Ming, offers a unique perspective on gender bias. With a PhD in psychology and computational neuroscience, Dr. Ming had experience working as a teacher and researcher before undergoing a gender transition. As a woman, she realized that colleagues treated her differently, for example asking her fewer questions about mathematics; but more importantly, she points to a recent Yale study in which participants found women applicants less competent for management positions at a research university.
Perhaps a reduction of human bias isn’t necessarily a bad thing. Gild’s small team seems to think that the algorithm is more merit-based than traditional methods of employment. Perhaps reviewing tangible accomplishments is a powerful step toward reducing unnecessary legacy biases. Others might argue that the algorithm is limited to only measuring what can be measured.
Top schools have reputations for a reason, yet graduating from one doesn’t necessarily guarantee that a person plays well with others or is a creative, talented developer. Likewise, another study from Yale cited in the NYT article finds that the value of employee referrals depends on the productivity of the referring employee. On the other hand, selecting people who lack these traditional achievement markers is a gamble, one that Gild is currently testing through its own hires.
Returning to Jade Dominguez, quoted at the beginning of this post: he is a programmer Gild found, who scored highly through their algorithm and is one of their first 10 employees. He does not have a college degree, but he does have a volume of code and a well-respected position in the coding community (according to Gild’s algorithm measures). His experience reflects substantial practice, trial and error, problem-solving, and a record of completing projects. Increasingly, there is pressure to teach these skills at early levels of education, even before high school. The practice, however, is something that has to be pursued by the individual and is traditionally a marker of success, as demonstrated by musicians, artists, athletes, anyone in an area that involves mastery. Can machines predict talent? Can they predict employability? It will be interesting to watch how potentially reducing human bias and focusing on merit might change the hiring landscape. Then, of course, no matter what the algorithm finds, there’s always the interview.
January 6th, 2013
Today I will be co-convening a session with Ray Siemens at the Modern Language Association’s annual conference on the topic of using and adapting research methods typically associated with social sciences to research in the humanities. Our panelists include colleagues Eric Meyer (OII), Lindsay Thomas and Dana Solomon (UC Santa Barbara), James Kelley (Mississippi), and Lynne Siemens (University of Victoria).
Holders of digital collections are increasingly asked to demonstrate their impact as they seek additional resources for maintenance and growth, but their training as experts in the humanities content of the collections has not often included the social science and measurement skills needed to understand uses. Technological advances in the humanities have led to a re-envisioning and re-interpreting of traditional approaches and present an opportunity to enlarge and extend methodological practice with the inclusion of new disciplinary approaches (see Siemens 2010). However, researchers using social science methods must move beyond their disciplinary training into “unmarked” territory, often with limited disciplinary support or guidance (see Collins, Bulger, and Meyer 2012). Some literary scholars are employing social science methodologies such as interviews, ethnographies, surveys, and statistical data analysis in their research (some examples include Galey and Ruecker 2010; Siemens et al. 2009; Hoover 2008). Despite the increasing need and expectation for studies of use and users, few humanities programs provide support or training in this area. Our interdisciplinary panel draws on the expertise of literary scholars and social scientists to explore several strategies that can support the adoption of social science methodology in literary studies and other humanities research.
(1) First, Social Scientists themselves can provide examples of how these research methodologies might be employed within the study of the Humanities by participating directly in this research and presenting at a variety of disciplinary-oriented conferences. In a series of projects starting in 2008, Eric Meyer and his colleagues found that understanding users and uses is increasingly important in the digital humanities as research becomes more dependent on shared digital resources. As part of our roundtable discussion, Meyer will describe the online toolkit developed in response to these findings that allows non-experts to use quantitative and qualitative tools to understand the kinds of impacts digital humanities materials are having. The toolkit includes tools that range from focus groups and interviews to webometrics, log analysis, surveys, and Twitter tracking. Meyer will briefly demonstrate the toolkit, but will mainly focus on the case studies done by humanities experts, and on their reflections about the process. Emerging from discussion of these case studies will be Meyer’s key argument, that training and support for domain experts in the humanities to use social science research methods can be more powerful than bringing in external social science or computing specialists who may understand measurement, but are less likely to understand the context of the resources and the communities who rely on them.
(2) Lynne Siemens will argue that a second strategy to support adoption of social science methodology involves creating opportunities for discussion and collaboration between Social Scientists and Humanists through development of online resources and involvement in interdisciplinary centres (For example, see Centre for Research in the Arts 2011; King’s College London 2012; Digital Humanities Center for Japanese Arts and Cultures 2008). Siemens will discuss how these relationships increase the likelihood for those “accidental meetings” and planned sessions which serve as a foundation for interdisciplinary innovation, collaboration and knowledge sharing (Cech and Rubin 2004). As University of Victoria’s ETCL has found, associated Social Scientists can provide mentorship and guidance to students and researchers in the adoption of this methodology, particularly around the development of interview scripts, surveys and ethics applications (ETCL nd-b).
(3) Given that researchers learn appropriate disciplinary methodology during graduate training, which is later reinforced through institutional rewards and recognition policies (Bruhn 2000; Gold and Gold 1985), a third strategy must be to create opportunities that allow Humanities scholars to explore Social Science methodology. Dana Solomon and Lindsay Thomas, current doctoral candidates at the University of California, Santa Barbara, will describe their experience performing usability studies of the Research-oriented Social Environment (RoSE), a system that encourages users to seek out relationships between authors, works, and commentators–living and dead–as part of a social network of knowledge (funded by a National Endowment for the Humanities Digital Humanities Start-Up Grant, and directed by Professor Alan Liu). Solomon and Thomas will recount their process of examining how RoSE users interact with other people and documents in and through the system, providing an account of their experience applying social science research methods to study users and uses.
(4) James Kelley will further examine the process of learning and applying social science methods to literary research. Kelley applied grounded theory, a method widely used in qualitative research in the social sciences, to explore what teachers generally say and do not say when talking in online forums with students and with each other about assigned novels. He will describe his method for analyzing over 5,466 posts and conclude by addressing the additional challenge of explaining the value and practical methods of grounded theory to audiences who are largely unfamiliar with it.
December 3rd, 2012
Today I’m giving a lecture about learning environments that promote interdisciplinary dialogue in Internet Science. After 10+ years working in an interdisciplinary space, I take for granted how much easier it has become. I forget the many times I sat through lectures that were like a foreign language in which every third word made sense. I also forget how difficult it was to start talking to people in other disciplines: graduate students already had their cohort, faculty their students, so showing up and not speaking their language meant it took time to become part of their conversations.
During the summer, I convened, along with Cristobal Cobo, Tim Davies, and Ian Brown, a summer school as part of the Network of Excellence in Internet Science. It was a week-long program for early career researchers to engage in interdisciplinary discussions. The topic was privacy, trust, & reputation and each morning, two invited lecturers would discuss these topics from their disciplinary perspectives. We invited faculty from computer science to discuss the technical dimensions of privacy, law professors to explain what is and is not feasible to regulate, and an educational researcher who asks students to draw pictures of who they think connects to them on the Internet. This approach did not seem to ruffle any feathers. However, the afternoon sessions were a more challenging sell.
We blocked out the afternoons for interdisciplinary discussion. I modeled the discussions after seminars offered by UCSB’s CITS PhD emphasis, in which we select a topic, for example “reading,” “trust,” “mobile phones” and graduate students each present research questions and methods their discipline would use to approach the topic. For the summer school, we had to convince colleagues at each stage that it was definitely a good idea to give the participants three hours per day for interdisciplinary discussion. After the first day, we had to convince the participants it was a good idea. By the third day, they were busy talking to each other.
There are some questions that cannot be answered by a single discipline. This challenge existed before the Internet (a recent example provided by Radiolab: why isn’t blue mentioned in The Odyssey or The Iliad?), but like information, communication, even love, the Internet magnifies and accelerates it. Disciplinary silos do not serve Internet research well, and really, how could they? How can we approach any experience on the Internet without considering the technical backbone behind it? How can technologies be developed without considering user needs and behaviors? How can we understand the Internet from a purely scientific view without considering the art that makes it work: that people continue to find new ways of using it, that these uses continue to surprise and challenge, that technologies tend to serve purposes other than those initially intended, and that why this happens is worth studying?
Much Internet research has spanned disciplinary boundaries and enabled us to better understand
–why and how people organize online and off,
–which groups are excluded from fully participating in the Internet and why,
–the challenges of protecting personal information, why it might matter and why it might not,
–how knowledge is developed and shared, individually and institutionally.
I was motivated to continue attending seminars and lectures outside my field because I had a question I couldn’t answer…I wanted a way to measure students’ online research practices, to understand why they selected certain sources and how they used them. Despite how initially uncomfortable it was for an Education student with no programming background to attend courses in Computer Science, or with a limited quantitative research background to attend Cognitive Psychology courses, I was motivated to keep trying.
Why not interdisciplinary?
As part of the summer school, we asked students what barriers prevented them from pursuing interdisciplinary work. Some said they didn’t really have an interest in working with other disciplines, or didn’t know anyone to collaborate with in their area of interest. Others said that funding was more of a challenge when working across disciplines, or they had experienced difficulty in being accepted to conferences or having articles accepted for publication because their interdisciplinary work did not fit. Others said there was no professional incentive to collaborate because the journals and conferences they would submit to were not viewed as prestigious within their departments. The responses were similar to those I heard in informal conversations as a graduate student. These are fair points. Interdisciplinary work is still in its infancy and despite efforts by AoIR and the increasing number of interdisciplinary journals, publications and professional recognition still seem to be exceptional. However, as demonstrated by initiatives in the digital humanities, in the small, but growing number of interdisciplinary departments, institutions can slowly change the paradigm of recognition.
Where to start?
NSF Interactive Digital Multimedia IGERT
During my graduate years at UCSB, several interdisciplinary opportunities were emerging. The National Science Foundation funded students through their IGERT program to work in interdisciplinary groups to develop projects. IGERT students were drawn from across campus, including computer science, cognitive science, art, and geography. All students were awarded tuition plus a stipend and travel funds and were given a collaborative workspace. The program required that students work in small groups with other IGERT students and attend a Friday seminar on interdisciplinary topics related to their research.
In my experience, while funding was abundant, faculty support never materialized. The Friday lectures always seemed to be on very specialized topics, without an organizing theme. Student projects were generally based on the PIs’ interests rather than generated by the students, and those not in engineering did not receive as much mentorship. Yet the projects afforded an opportunity to discuss disciplinary approaches and discover complementary questions and methods. Unfortunately, many students encountered difficulties in publishing their interdisciplinary work or being accepted to conferences, so they prioritized their disciplinary work.
Cognitive Science PhD emphasis
Based on interest from engineers studying artificial intelligence and geographers trying to get a better sense of how people approach maps, the Department of Psychology started offering a seminar series that combined a few quarterly lectures on interdisciplinary topics with a weekly student seminar. The seminar was particularly targeted toward students and faculty outside of Psychology, so it provided relevant background information and an overview of methods at the start of each quarter. Faculty were invited from across campus and usually brought interested graduate students.
The Cognitive Science emphasis allowed graduate students from outside Psychology to take coursework and receive an emphasis on their diploma in Cognitive Science. In addition to the coursework, participants were required to have two members of the interdisciplinary emphasis on their dissertation committee, to complete a research paper or proposal in Cognitive Science, and for cognitive science to be a central focus in their dissertation.
Also on Friday afternoons, the course discussion was lively, often resulting in long disagreements over disciplinary assumptions. Normally, students continued the discussion over drinks or dinner, so the emphasis also succeeded in creating an interdisciplinary community.
Center for Information Technology & Society PhD emphasis
Core faculty from Political Science, Computer Science, Film Studies, Sociology, and English started a lecture series around 1999 that grew into a graduate seminar about four years later. The strength of the seminar was that a core group of faculty was always present, sometimes outnumbering the students, to discuss their discipline’s approach to whatever was the topic of discussion. As mentioned above, early seminars were organized around a theme, for example, mobile phones, and each week two graduate students and sometimes faculty would suggest readings to the group and discuss research questions they would ask around the topic. The faculty modeled respectful dialogue, but pushed each other and the students to challenge their disciplinary assumptions.
The graduate seminar series also evolved into a PhD emphasis, allowing students to receive recognition for coursework completed in the area of Technology & Society. Similar to the Cognitive Science emphasis, the Technology & Society emphasis drew an interdisciplinary faculty steering committee from across campus. In addition to the seminar, coursework included courses in two areas: Culture & History and Society & Behavior. Courses were offered through several departments including Anthropology, Environmental Science and Management, History, Education, and Communication (in addition to the disciplines listed at the beginning of this section). Students were required to have a faculty member from the steering committee be part of their dissertation committee and to complete a dissertation that related to Technology & Society.
The program modeled interdisciplinary dialogue and provided opportunities for students to work on research projects with faculty and students from other departments. In fact, a few faculty received grants specifically to foster interdisciplinary collaboration and created strong cohort relations through these research opportunities.
[stay tuned...more to come]
April 25th, 2012
Funding bodies are increasingly requiring evidence of impact for higher education efforts in outreach and public engagement, yet measuring this impact is challenging. A review of current practice combined with interviews of public engagement experts in the UK underscored the degree to which outcomes of public engagement and outreach efforts are often not immediately visible, but rather diffuse and developed over time.
This challenge in measuring impact was a main point of conversation on Friday when the OII hosted the judging panel for our first-time European Competition for Best Innovations in University Outreach and Public Engagement, supported by the European Commission as part of the ULab project. The panel consisted of experts in public engagement from the ULab partner countries. Judges represented a range of backgrounds, including funding officers, policymakers, and those engaged in award-winning outreach.
When reporting impact, the majority of entries cited audience size, whether measured by attendance, web traffic, web clicks, or distribution of printed materials. Yet this reporting usually did not directly measure progress on the stated aims of the projects. For example, if a project aims to change perceptions or increase engagement with science, simple attendance at a fair or visits to a website cannot show whether these objectives were met. Tracking whether a person then attends another science event, enrolls in a science class, or undertakes a degree in science may provide a stronger measure, but even then the extent of a particular event’s influence is unknown. Of course, the problem is that indicators of input are more easily and economically gathered than indicators of impact, which are costly to collect, require additional activity, and may not yield definitive answers.
However, some entries used participant surveys to gather more details about impact and experience. These ranged in depth from like/dislike buttons to short-answer questions about learning. Some projects engaged their participants in focus group discussions aimed at improving their offerings and better meeting the needs of their target audience. A few looked at impacts over time, showing shifts in community engagement, or increases in student enrollment.
The question of depth versus breadth of impact was debated among our judging panel, showing that even experts hold diverse opinions about what constitutes good practice in public engagement. We found that entries from a number of European countries focused on the number of people reached as a measure of success. For example, events such as the Researchers' Nights hosted throughout Europe aim to engage large numbers of community members in short events and therefore measure success by attendance levels. By way of contrast, several entries from the UK placed an emphasis on depth of engagement. In other words, a project may have fewer than 20 participants, but involve weeks or months of interaction that leads to a change in behavior, for example, a demonstrated interest in science or a better understanding of a different social perspective. Some judges placed more emphasis on breadth, while others focused on depth.
Overall, even in top-ranked entries, the judges found that impact measures were not always ideal or fully developed, and recommended that the projects consider gathering more evidence of impact. One of the winning entries, Active Science, demonstrated a multi-faceted approach to impact measures. In addition to reporting audience attendance, the programme also demonstrated its success by tracing its expansion from a local to a nationwide project. To measure whether the project was successful in shifting perceptions about science, participants were given pre- and post-activity questionnaires that measured shifts in their attitudes.
Matching clearly defined objectives to appropriate measures may improve reporting and evaluation of impact. The wealth of activities and approaches reported in the EngageU entries provides a repository of strong examples of impact measures, as well as a means of comparing how projects with similar objectives measure their impact.
Of course, you may have very different views on the centrality of impact assessment for outreach. We would appreciate hearing from you.
April 25th, 2012
Vicki Nash and I were successful in our bid to the Fell Fund to examine how risk and harm are used in literature addressing children’s use of the Internet. Vera Slavtcheva-Petkova, a recent graduate of Loughborough University, joined us in December and has been making swift progress through the mountain of relevant literature. To date, we have collected 1,243 documents, mostly peer-reviewed journal articles, along with a solid proportion of reports, working papers, and book excerpts. By incorporating ‘grey literature’ generated by practitioner groups such as charities and law enforcement alongside academic articles, we aim to develop a more complete understanding of harm as it is experienced by young internet users. In particular, we want to better understand the frequency and severity of occurrences and use this understanding to evaluate interventions.
To organize ourselves, we are focusing on empirical studies to tease out whether and how harm is operationalized, looking for documented incidents of harm or measures used by practitioners to identify harm. Early on, we realized that interviewing experts in law, law enforcement, psychology, social work, education, and outreach would lessen our chances of missing key work in the field. In addition, our expert contacts provide context and overviews as we move across different disciplines and perspectives.
Our work has taken us into the areas of cyberbullying, self-harm, and sexual abuse, certainly the darker side of the Internet. We are also exploring current legislation and interventions in several countries to get a sense of successes and challenges in approaching this issue.
At this point, we are three-quarters of the way through our large research corpus and look forward to sharing early observations very soon.
January 16th, 2012
During graduate school, I participated in an experimental seminar, “Literature+: Cross-Disciplinary Models of Literary Interpretation,” taught by Alan Liu. He asked students to form groups around topics of their choosing and perform analyses using digital tools on their materials. Most students shared similar research interests and organized their projects around a content-based theme. Our group represented four different disciplines and formed around our interest in digital tools, rather than content. Professor Liu created a toybox of links to various textual analysis tools that generated visualizations, translations, data about word counts, etc. Each of us took a tool in which we were to become “expert,” and applied that tool to data we had collected for our research.
In our recently published book chapter, “Interdisciplinary Knowledge Work: Digital Textual Analysis Tools and Their Collaboration Affordances,” our motley team discusses how we applied these digital tools to our research goals and collaborative work. The most important lesson our collaborative experience taught us is that working together both pushed and liberated us to experiment with our data and methods. In fact, much like our visualizations provide a big-picture view of the texts we study, the multidisciplinary nature of our process forced us to step back and view our research at a macro level. Although our collaboration began as a class project, playing together with technologies led each of us to new and significant understandings of our texts.
September 1st, 2011
This post is not going to promise dramatic learning gains from using a new technology. It’s not one of those stories where at first a teacher was skeptical, but in the end, the classroom was like a sports movie where the technology hit the winning home run. I feel skeptical when I read those stories. I don’t doubt the success, but I wonder whether the learning gains, increased student interest and participation, or higher levels of reported satisfaction have less to do with the iPad, blog, Twitter stream, or virtual environment and more to do with who is in the classroom.
Cathy Davidson recently described an idyllic experience of teaching a course in which she and the students shared in the discovery of new applications of technologies for learning. She describes the process of developing the course, the thrill when the students actually invited and facilitated a guest lecture, and the ways in which the students challenged her to really be collaborative, even in grading.
If we step back for a moment, though, and consider a class with Davidson and those same students without the new technologies, what would the learning experience be like? I imagine it would still be exceptional, because Davidson is an obviously engaged teacher and the students are obviously engaged learners. She employs teaching strategies that were effective before the new technologies she describes. In particular, she encourages students to take ownership of their learning experience and creates a flexible environment to support whatever direction they take. When developing assignments, Davidson incorporates research on motivation, particularly students’ tendency to put more effort into writing for an authentic audience. She also has deep experience with her topic and an obvious enthusiasm for both the content and the teaching. These factors are consistently linked to positive learning experiences in educational research. Additionally, the students clearly seem motivated to learn. She describes the class list as a diverse collection of disciplines, so the students appear to be choosing the course. They demonstrate active involvement with the assignments and content and even provide substantive feedback for future courses.
Davidson’s approach to her class corresponds with much of the research on good teaching. Now, if we imagine the same syllabus and same access to technologies, but with a different teacher, what happens? The course might still be exceptional, or it might not.
A common theme when addressing technology in education is a focus on the particular technology and the success or failure of its use. In his recent article about educational technologies in developing countries, David Risher urges consideration of ‘the triangle that connects students, teachers, and ideas.’ The way in which a teacher incorporates a technology, designs the learning environment, and promotes learning determines the ultimate effectiveness of that technology. Returning to Davidson’s example, the students are described as knowing they can take risks because of the support and encouragement she provides. The technologies are secondary to the empowering environment she creates. In other hands, the students may have been focused on their screens, updating their Facebook profiles while the teacher lectured at the front of the room…a forgettable experience.
When we describe the participatory culture new technologies afford, the teacher who brings these tools into the classroom is equally important; the tool merely plays a supporting role.