April 28th, 2013
“It’s the great thing about code,” he said of computer language. “It’s largely merit-driven. It’s not about what you’ve studied. It’s about what you’ve shipped.” –Jade Dominguez, quoted in the NYT, 27 April 2013
Gild, a San Francisco start-up, is taking the Moneyball approach to identifying talented programmers who might otherwise be overlooked. As reported in the New York Times, the company uses an algorithm to find productive and well-respected programmers who may lack the traditional qualifications, such as graduating from a top school, working for a top company, or being referred by a current employee.
‘The start of something powerful’
This is more than a story of an interesting, possibly controversial new algorithm. Of course, the algorithm is limited to information that is measurable and publicly available. Even so, the traces we leave online potentially say a lot about us. Gild focuses on contributions to well-known programming communities, such as GitHub. The algorithm doesn’t stop at how many posts a programmer makes, but considers how each individual’s participation is valued by the programming community. This participation might not easily show on a resume, but it could in a Google search: how much an applicant contributes to a community and how those bits of code are taken up. Skills-based, yes, but more so, a record of how an applicant has demonstrated the capacity for engaging with a larger community through productive contributions.
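Gild has not published its model, but a minimal sketch of the underlying idea, weighting what a programmer ships by how the community takes it up rather than by raw activity, might look something like this (all field names and weights below are hypothetical):

```python
# A minimal, hypothetical sketch of a community-uptake score.
# Gild's actual model is proprietary; all field names and weights are invented.

def merit_score(contributions):
    """Score a candidate by how the community took up each contribution,
    not merely by how many contributions exist."""
    score = 0.0
    for c in contributions:
        score += (
            2.0 * c.get("forks", 0)      # others building on the code
            + 1.0 * c.get("stars", 0)    # community endorsement
            + 0.5 * c.get("comments", 0) # discussion generated
        )
    return score

# Same activity level, very different uptake:
prolific = [{"forks": 0, "stars": 1, "comments": 0}] * 20
respected = [{"forks": 12, "stars": 40, "comments": 9}] * 2
print(merit_score(prolific), merit_score(respected))  # 20.0 vs. 137.0
```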
The chief scientist at Gild, Dr. Vivienne Ming, offers a unique perspective on gender bias. With a PhD in psychology and computational neuroscience, Dr. Ming had experience working as a teacher and researcher before undergoing a gender transition. As a woman, she realized that colleagues treated her differently, for example by asking her fewer questions about mathematics. More importantly, she points to a recent Yale study in which participants rated women applicants as less competent for management positions at a research university.
Perhaps a reduction of human bias isn’t necessarily a bad thing. Gild’s small team seems to think that the algorithm is more merit-based than traditional hiring methods. Perhaps reviewing tangible accomplishments is a powerful step toward reducing unnecessary legacy biases. Others might argue that the algorithm is limited to measuring only what can be measured.
Top schools have reputations for a reason, yet graduating from one doesn’t necessarily guarantee that a person plays well with others or is a creative, talented developer. Likewise, another Yale study cited in the NYT article finds that the value of employee referrals depends on the productivity of the referring employee. On the other hand, selecting people who lack these traditional achievement markers is a gamble, one that Gild is currently testing through its own hires.
Returning to Jade Dominguez, quoted at the beginning of this post: he is a programmer Gild found through its algorithm, where he scored highly, and he is one of the company’s first 10 employees. He does not have a college degree, but he does have a large volume of code and a well-respected position in the coding community (according to Gild’s algorithmic measures). His experience reflects substantial practice, trial and error, problem-solving, and a record of completing projects. Increasingly, there is pressure to teach these skills at early levels of education, even before high school. Sustained practice, however, has to be pursued by the individual, and it is traditionally a marker of success, as demonstrated by musicians, artists, and athletes, in any area that involves mastery. Can machines predict talent? Can they predict employability? It will be interesting to watch how potentially reducing human bias and focusing on merit might change the hiring landscape. Then, of course, no matter what the algorithm finds, there’s always the interview.
January 6th, 2013
Today I will be co-convening a session with Ray Siemens at the Modern Language Association’s annual conference on using and adapting research methods typically associated with the social sciences for research in the humanities. Our panelists include colleagues Eric Meyer (OII), Lindsay Thomas and Dana Solomon (UC Santa Barbara), James Kelley (Mississippi), and Lynne Siemens (University of Victoria).
Holders of digital collections are increasingly asked to demonstrate their impact as they seek additional resources for maintenance and growth, but their training as experts in the humanities content of the collections has not often included the social science and measurement skills needed to understand uses. Technological advances in the humanities have led to a re-envisioning and re-interpreting of traditional approaches and present an opportunity to enlarge and extend methodological practice with the inclusion of new disciplinary approaches (see Siemens 2010). However, researchers using social science methods must move beyond their disciplinary training into “unmarked” territory, often with limited disciplinary support or guidance (see Collins, Bulger, and Meyer 2012). Some literary scholars are employing social science methodologies such as interviews, ethnographies, surveys, and statistical data analysis in their research (some examples include Galey and Ruecker 2010; Siemens et al. 2009; Hoover 2008). Despite the increasing need and expectation for studies of use and users, few humanities programs provide support or training in this area. Our interdisciplinary panel draws on the expertise of literary scholars and social scientists to explore several strategies that can support the adoption of social science methodology in literary studies and other humanities research.
(1) First, Social Scientists themselves can provide examples of how these research methodologies might be employed within the study of the Humanities by participating directly in this research and presenting at a variety of discipline-oriented conferences. In a series of projects starting in 2008, Eric Meyer and his colleagues found that understanding users and uses is increasingly important in the digital humanities as research becomes more dependent on shared digital resources. As part of our roundtable discussion, Meyer will describe the online toolkit developed in response to these findings that allows non-experts to use quantitative and qualitative tools to understand the kinds of impacts digital humanities materials are having. The toolkit includes tools that range from focus groups and interviews to webometrics, log analysis, surveys, and Twitter tracking. Meyer will briefly demonstrate the toolkit, but will mainly focus on the case studies done by humanities experts, and on their reflections about the process. Emerging from discussion of these case studies will be Meyer’s key argument: that training and support for domain experts in the humanities to use social science research methods can be more powerful than bringing in external social science or computing specialists who may understand measurement, but are less likely to understand the context of the resources and the communities who rely on them.
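To make one of these tools concrete: a basic log analysis simply counts how often a collection’s pages are requested and by how many distinct visitors. The sketch below is hypothetical and is not the toolkit itself; it assumes a standard web server access log:

```python
# Hypothetical sketch of basic log analysis for a digital collection.
# Assumes an Apache/Nginx "combined" access log; field positions may vary.
from collections import Counter

def analyse_log(path):
    visitors = set()   # distinct client IPs as a rough visitor count
    pages = Counter()  # requests per page
    with open(path) as log:
        for line in log:
            fields = line.split()
            if len(fields) < 7:
                continue  # skip malformed lines
            visitors.add(fields[0])  # client IP
            pages[fields[6]] += 1    # requested path
    return len(visitors), pages.most_common(10)

unique_visitors, top_pages = analyse_log("access.log")  # path is illustrative
print(f"{unique_visitors} distinct visitors")
for page, hits in top_pages:
    print(f"{hits:6d}  {page}")
```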
(2) Lynne Siemens will argue that a second strategy to support adoption of social science methodology involves creating opportunities for discussion and collaboration between Social Scientists and Humanists through development of online resources and involvement in interdisciplinary centres (For example, see Centre for Research in the Arts 2011; King’s College London 2012; Digital Humanities Center for Japanese Arts and Cultures 2008). Siemens will discuss how these relationships increase the likelihood for those “accidental meetings” and planned sessions which serve as a foundation for interdisciplinary innovation, collaboration and knowledge sharing (Cech and Rubin 2004). As University of Victoria’s ETCL has found, associated Social Scientists can provide mentorship and guidance to students and researchers in the adoption of this methodology, particularly around the development of interview scripts, surveys and ethics applications (ETCL nd-b).
(3) Given that researchers learn appropriate disciplinary methodology during graduate training, which is later reinforced through institutional rewards and recognition policies (Bruhn 2000; Gold and Gold 1985), a third strategy must be to create opportunities that allow Humanities scholars to explore Social Science methodology. Dana Solomon and Lindsay Thomas, current doctoral candidates at the University of California, Santa Barbara, will describe their experience performing usability studies of the Research-oriented Social Environment (RoSE), a system that encourages users to seek out relationships between authors, works, and commentators–living and dead–as part of a social network of knowledge (funded by a National Endowment for the Humanities Digital Humanities Start-Up Grant, and directed by Professor Alan Liu). Solomon and Thomas will describe their process of examining how RoSE users interact with other people and documents in and through the system, providing an account of their experience applying social science research methods to study users and uses.
(4) James Kelley will further examine the process of learning and applying social science methods to literary research. Kelley applied grounded theory, a method widely used in qualitative research in the social sciences, to explore what teachers generally say and do not say when talking in online forums with students and with each other about assigned novels. He will describe his method for analyzing 5,466 posts and conclude by addressing the additional challenge of explaining the value and practical methods of grounded theory to audiences who are largely unfamiliar with it.
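The coding itself in grounded theory is interpretive and iterative, but once posts have been tagged, tallying which themes dominate and which co-occur is a mechanical step. A hypothetical sketch, with invented posts and code labels:

```python
# Hypothetical sketch: tallying codes assigned to forum posts during
# grounded-theory analysis. The posts and code labels are invented.
from collections import Counter
from itertools import combinations

coded_posts = [
    {"plot summary", "personal response"},
    {"plot summary", "assessment talk"},
    {"personal response"},
]

code_counts = Counter()
co_occurrence = Counter()
for codes in coded_posts:
    code_counts.update(codes)
    co_occurrence.update(combinations(sorted(codes), 2))

print(code_counts.most_common())    # which themes dominate
print(co_occurrence.most_common())  # which themes appear together
```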
December 3rd, 2012
Today I’m giving a lecture about learning environments that promote interdisciplinary dialogue in Internet Science. After 10+ years working in an interdisciplinary space, I take for granted how much easier it has become. I forget the many times I sat through lectures that were like a foreign language where only every third word made sense. I also forget how difficult it was to start talking to people in other disciplines: graduate students already had their cohort and faculty their students, so showing up and not speaking their language meant it took time to become part of their conversations.
During the summer, I convened, along with Cristobal Cobo, Tim Davies, and Ian Brown, a summer school as part of the Network of Excellence in Internet Science. It was a week-long program for early career researchers to engage in interdisciplinary discussions. The topic was privacy, trust, and reputation, and each morning two invited lecturers would discuss these topics from their disciplinary perspectives. We invited faculty from computer science to discuss the technical dimensions of privacy, law professors to explain what is and is not feasible to regulate, and an educational researcher who asks students to draw pictures of who they think connects to them on the Internet. This approach did not seem to ruffle any feathers. However, the afternoon sessions were a more challenging sell.
We blocked out the afternoons for interdisciplinary discussion. I modeled the discussions after seminars offered by UCSB’s CITS PhD emphasis, in which we select a topic, for example “reading,” “trust,” or “mobile phones,” and graduate students each present the research questions and methods their discipline would use to approach it. For the summer school, we had to convince colleagues at each stage that giving the participants three hours per day for interdisciplinary discussion was a good idea. After the first day, we had to convince the participants it was a good idea. By the third day, they were busy talking to each other.
There are some questions that cannot be answered by a single discipline. This challenge existed before the Internet (a recent example provided by Radiolab: why isn’t blue mentioned in The Odyssey or The Iliad?), but like information, communication, even love, the Internet magnifies and accelerates it. Disciplinary silos do not serve Internet research well, and really, how could they? How can we approach any experience on the Internet without considering the technical backbone behind it? How can technologies be developed without considering user needs and behaviors? How can we understand the Internet from a purely scientific view without considering the art that makes it work: that people continue to find new ways of using it, that these uses continue to surprise and challenge, that technologies tend to serve purposes other than those initially intended, and that why this happens is worth studying?
Much Internet research has spanned disciplinary boundaries and enabled us to better understand
–why and how people organize online and off,
–which groups are excluded from fully participating in the Internet and why,
–the challenges of protecting personal information, why it might matter and why it might not,
–how knowledge is developed and shared, individually and institutionally.
I was motivated to continue attending seminars and lectures outside my field because I had a question I couldn’t answer: I wanted a way to measure students’ online research practices, to understand why they selected certain sources and how they used them. Despite how uncomfortable it initially was for an Education student with no programming background to attend Computer Science courses, or for one with a limited quantitative research background to attend Cognitive Psychology courses, I was motivated to keep trying.
Why not interdisciplinary?
As part of the summer school, we asked students what barriers prevented them from pursuing interdisciplinary work. Some said they didn’t really have an interest in working with other disciplines, or didn’t know anyone to collaborate with in their area of interest. Others said that funding was more of a challenge when working across disciplines, or that they had experienced difficulty being accepted to conferences or having articles accepted for publication because their interdisciplinary work did not fit. Others said there was no professional incentive to collaborate because the journals and conferences they would submit to were not viewed as prestigious within their departments. The responses were similar to those I heard in informal conversations as a graduate student. These are fair points. Interdisciplinary work is still in its infancy, and despite efforts by AoIR and the increasing number of interdisciplinary journals, publications and professional recognition still seem to be exceptional. However, as demonstrated by initiatives in the digital humanities and in the small but growing number of interdisciplinary departments, institutions can slowly change the paradigm of recognition.
Where to start?
NSF Interactive Digital Multimedia IGERT
During my graduate years at UCSB, several interdisciplinary opportunities were emerging. The National Science Foundation funded students through their IGERT program to work in interdisciplinary groups to develop projects. IGERT students were drawn from across campus, including computer science, cognitive science, art, and geography. All students were awarded tuition plus a stipend and travel funds and were given a collaborative workspace. The program required that students work in small groups with other IGERT students and attend a Friday seminar on interdisciplinary topics related to their research.
In my experience, while funding was abundant, faculty support never materialized. The Friday lectures always seemed to be on very specialized topics, without an organizing theme. Student projects were generally based on the PIs’ interests rather than generated by the students, and those not in engineering did not receive as much mentorship. Yet the projects afforded an opportunity to discuss disciplinary approaches and discover complementary questions and methods. Unfortunately, many students encountered difficulties in publishing their interdisciplinary work or being accepted to conferences, so they prioritized their disciplinary work.
Cognitive Science PhD emphasis
Based on interest from engineers studying artificial intelligence and geographers trying to get a better sense of how people approach maps, the Department of Psychology started offering a seminar series that combined a few quarterly lectures on interdisciplinary topics with a weekly student seminar. The seminar was particularly targeted toward students and faculty outside of Psychology, so it provided relevant background information and an overview of methods at the start of each quarter. Faculty were invited from across campus and usually brought interested graduate students.
The Cognitive Science emphasis allowed graduate students from outside Psychology to take coursework and receive an emphasis on their diploma in Cognitive Science. In addition to the coursework, participants were required to have two members of the interdisciplinary emphasis on their dissertation committee, to complete a research paper or proposal in Cognitive Science, and for cognitive science to be a central focus in their dissertation.
Also on Friday afternoons, the course discussion was lively, often resulting in long disagreements over disciplinary assumptions. Normally, students continued the discussion over drinks or dinner, so the emphasis also succeeded in creating an interdisciplinary community.
Center for Information Technology & Society PhD emphasis
Core faculty from Political Science, Computer Science, Film Studies, Sociology, and English started a lecture series around 1999 that grew into a graduate seminar about four years later. The strength of the seminar was that a core group of faculty was always present, sometimes outnumbering the students, to discuss their discipline’s approach to whatever topic was under discussion. As mentioned above, early seminars were organized around a theme, for example, mobile phones, and each week two graduate students, and sometimes faculty, would suggest readings to the group and discuss the research questions they would ask around the topic. The faculty modeled respectful dialogue, but pushed each other and the students to challenge their disciplinary assumptions.
The graduate seminar series also evolved into a PhD emphasis, allowing students to receive recognition for coursework completed in the area of Technology & Society. Similar to the Cognitive Science emphasis, the Technology & Society emphasis drew an interdisciplinary faculty steering committee from across campus. In addition to the seminar, coursework included courses in two areas: Culture & History and Society & Behavior. Courses were offered through several departments including Anthropology, Environmental Science and Management, History, Education, and Communication (in addition to the disciplines listed at the beginning of this section). Students were required to have a faculty member from the steering committee be part of their dissertation committee and to complete a dissertation that related to Technology & Society.
The program modeled interdisciplinary dialogue and provided opportunities for students to work on research projects with faculty and students from other departments. In fact, a few faculty received grants specifically to foster interdisciplinary collaboration and created strong cohort relations through these research opportunities.
[stay tuned...more to come]
April 25th, 2012
Funding bodies are increasingly requiring evidence of impact for higher education efforts in outreach and public engagement, yet measuring this impact is challenging. A review of current practice combined with interviews of public engagement experts in the UK underscored the degree to which outcomes of public engagement and outreach efforts are often not immediately visible, but rather diffuse and developed over time.
This challenge in measuring impact was a main point of conversation on Friday when the OII hosted the judging panel for our first-time European Competition for Best Innovations in University Outreach and Public Engagement, supported by the European Commission as part of the ULab project. The panel consisted of experts in public engagement from the ULab partner countries. Judges represented a range of backgrounds, including funding officers, policymakers, and those engaged in award-winning outreach.
When reporting impact, the majority of entries cited audience size, whether measured by attendance, web traffic, web clicks, or distribution of printed materials. Yet this reporting usually did not directly measure progress on the stated aims of the projects. For example, if a project aims to change perceptions or increase engagement with science, simple attendance at a fair or visits to a website cannot show whether these objectives were met. Tracking whether a person then attends another science event, enrolls in a science class, or undertakes a degree in science may provide a stronger measure, but even then the extent of a particular event’s influence is unknown. Of course, the problem is that indicators of input are more easily and economically gathered than indicators of impact, which are costly, require additional activity, and may not yield definitive answers.
However, some entries used participant surveys to gather more details about impact and experience. These ranged in depth from like/dislike buttons to short-answer questions about learning. Some projects engaged their participants in focus group discussions aimed at improving their offerings and better meeting the needs of their target audience. A few looked at impacts over time, showing shifts in community engagement, or increases in student enrollment.
The question of depth versus breadth of impact was debated among our judging panel, showing that even experts have diverse opinions of what constitutes good practice in public engagement. We found that entries from a number of European countries focused on the number of people reached as a measure of success. For example, events such as the Researchers’ Nights hosted throughout Europe aim to engage large numbers of community members in short events and therefore measure success by attendance levels. By way of contrast, several entries from the UK placed an emphasis on depth of engagement. In other words, a project may have fewer than 20 participants, but weeks or months of interaction that lead to a change in behavior, for example, a demonstrated interest in science or a better understanding of a different social perspective. Some judges placed more emphasis on breadth, while others focused on depth.
Overall, even in top-ranked entries, the judges found that impact measures were not always ideal or fully developed and recommended that the projects consider developing more evidence of impact. One of the winning entries, Active Science, demonstrated a multi-faceted approach to impact measures. In addition to reporting on audience attendance, the programme also demonstrated its success by tracing its expansion from a local to a nation-wide project. To measure whether the project was successful in shifting perceptions about science, participants were given pre- and post-activity questionnaires that measured shifts in their attitudes.
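The arithmetic behind such a pre/post design is straightforward; a hypothetical sketch, with invented scores, of how an attitudinal shift might be summarized:

```python
# Hypothetical sketch of summarizing a pre/post attitude questionnaire.
# The scores are invented; a real study would also check scale
# reliability and test assumptions before drawing conclusions.
from statistics import mean
from scipy.stats import ttest_rel

pre = [3.1, 2.8, 3.5, 2.9, 3.0, 3.2]   # attitude score before (1-5 scale)
post = [3.8, 3.0, 4.1, 3.6, 3.4, 3.9]  # same participants, afterwards

shift = mean(post) - mean(pre)
t_stat, p_value = ttest_rel(post, pre)  # paired t-test on matched responses
print(f"mean shift = {shift:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```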
Matching clearly defined objectives to appropriate measures may improve the reporting and evaluation of impact. The wealth of activities and approaches reported in the EngageU entries provides a repository of strong examples of impact measures, as well as a means of comparing how projects with similar objectives measure their impact.
Of course, you may have very different views on the centrality of impact assessment for outreach. We would appreciate hearing from you.
April 25th, 2012
Vicki Nash and I were successful in our bid to the Fell Fund to examine how risk and harm are used in the literature addressing children’s use of the Internet. Vera Slavtcheva-Petkova, a recent graduate of Loughborough University, joined us in December and has been making swift progress through the mountain of relevant literature. To date, we have collected 1,243 documents, mostly peer-reviewed journal articles, along with a solid proportion of reports, working papers, and book excerpts. By incorporating ‘grey literature’ generated by practitioner groups, such as charities and law enforcement, with academic articles, we aim to develop a more complete understanding of harm as it is experienced by young internet users. In particular, we want to better understand the frequency and severity of occurrences and use this understanding to evaluate interventions.
To organize ourselves, we are focusing on empirical studies to tease out whether and how harm is operationalized, looking for documented incidents of harm or measures used by practitioners to identify harm. Early on, we realized that interviewing experts in law, law enforcement, psychology, social work, education, and outreach would lessen our chances of missing key work in the field. In addition, our expert contacts provide context and overviews as we move across different disciplines and perspectives.
Our work has taken us into the areas of cyberbullying, self-harm, and sexual abuse, certainly the darker side of the Internet. We are also exploring current legislation and interventions in several countries to get a sense of successes and challenges in approaching this issue.
At this point, we are three-quarters of the way through our large research corpus and look forward to sharing early observations very soon.
January 16th, 2012
During graduate school, I participated in an experimental seminar, “Literature+: Cross-Disciplinary Models of Literary Interpretation,” taught by Alan Liu. He asked students to form groups around topics of their choosing and perform analyses using digital tools on their materials. Most students shared similar research interests and organized their projects around a content-based theme. Our group represented four different disciplines and formed around our interest in digital tools, rather than content. Professor Liu created a toybox of links to various textual analysis tools that generated visualizations, translations, data about word counts, etc. Each of us took a tool in which we were to become “expert,” and applied that tool to data we had collected for our research.
In our recently published book chapter, “Interdisciplinary Knowledge Work: Digital Textual Analysis Tools and Their Collaboration Affordances,” our motley team discusses how we applied these digital tools to our research goals and collaborative work. The most important lesson our collaborative experience taught us is that working together both pushed us and liberated us to experiment with our data and methods. In fact, much like our visualizations provide a big-picture view of the texts we study, the multidisciplinary nature of our process forced us to step back and view our research at a macro level. Although our collaboration began as a class project, playing together with technologies led each of us to new and significant understandings of our texts.
September 1st, 2011
This post is not going to promise dramatic learning gains from using a new technology. It’s not one of those stories where at first a teacher was skeptical, but in the end, the classroom was like a sports movie where the technology scored the winning home run. I feel skeptical when I read those stories. I don’t doubt the success, but I wonder whether the learning gains, increased student interest and participation, or higher levels of reported satisfaction have less to do with the iPad, blog, Twitter stream, or virtual environment and more to do with who is in the classroom.
Cathy Davidson recently described an idyllic experience of teaching a course in which she and the students shared in the discovery of new applications of technologies for learning. She describes the process of developing the course, the thrill when the students actually invited and facilitated a guest lecture, and the ways in which the students challenged her to really be collaborative, even in grading.
If we step back for a moment, though, and consider a class with Davidson and those same students without the new technologies, what would the learning experience be like? I imagine it would still be exceptional, because Davidson is an obviously engaged teacher and the students are obviously engaged learners. She employs teaching strategies that were effective before the new technologies she describes. In particular, she encourages students to take ownership of their learning experience and creates a flexible environment to support whatever direction they take. When developing assignments, Davidson incorporates research on motivation, particularly students’ likelihood to put more effort into writing for an authentic audience. She also has deep experience with her topic and an obvious enthusiasm for both the content and the teaching. These factors are consistently linked to positive learning experiences in educational research. Additionally, the students clearly seem motivated to learn. She describes the class list as a diverse collection of disciplines, so the students appear to have chosen the course. They demonstrate active involvement with the assignments and content and even provide substantive feedback for future courses.
Davidson’s approach to her class corresponds with much of the research on good teaching. Now, if we imagine the same syllabus and same access to technologies, but with a different teacher, what happens? The course might still be exceptional, or it might not.
A common theme when addressing technology in education is a focus on the particular technology and the success or failure of its use. In David Risher’s recent article about educational technologies in developing countries, he urges consideration of ‘the triangle that connects students, teachers, and ideas.’ The way in which a teacher incorporates a technology, designs the learning environment, and promotes learning determines the ultimate effectiveness of the technology. Returning to Davidson’s example, the students are described as knowing they can take risks through the support and encouragement she provides. The technologies are secondary to the empowering environment she creates. In other hands, the students may have been focused on their screens, updating their Facebook profiles while the teacher lectured at the front of the room…a forgettable experience.
When describing the participatory culture new technologies afford, we should remember that the teacher who brings these tools into the classroom is equally important; the tool merely plays a supporting role.
July 29th, 2011
Last week, the New York Times reported that in 2007, Deutsche Bank entered into an agreement with two German universities, Humboldt University and the Technical University of Berlin, to fund a mathematical laboratory. The problematic parts were the ‘secret’ terms: according to the article, Deutsche Bank could not only influence the hiring process, but bank employees could also serve as adjunct professors. Perhaps the most disturbing aspects of the agreement were that the bank had veto power over the laboratory’s research agenda and, more importantly, “was given the right to review any research produced by members of the Quantitative Products Laboratory 60 days before it was published and could withhold permission for publication for as long as two years.”
The German universities’ decision to accept the funding highlights a global struggle for universities generally. As governments continue to drastically cut the budgets of higher education institutions, administrators are forced to make tough decisions that involve seeking external funding and/or making programmatic reductions. For example, the University of California system, after facing nearly a decade of budgetary shortfalls, was hit this month with additional deep cuts; having already made deep reductions, it is now being asked to cut further. At what point do these decreases in government funding threaten the fabric of the university? Perhaps they already do. So how do universities maintain their academic integrity and continue to pursue and preserve their research and teaching agendas with continually decreasing resources?
What if the collaboration involving Deutsche Bank becomes a standard model for survival?
July 11th, 2011
How often have you walked out of a room barely remembering anything from a lecture or presentation? It often feels that presentations focus on information transfer rather than connection. If much of what is communicated in academic presentations could easily be done asynchronously, academics are regularly missing an opportunity to connect with and engage their audience.
In this presentation to our Summer Doctoral Program, I use Mayer’s (2001) multimedia learning theory to promote more meaningful (and memorable) presentations. I encourage our visiting students to consider the human element of the presenter-audience relationship by thinking about audience need and how to best communicate their message.
May 18th, 2011
Yesterday, I attended Google’s Big Tent Event in Hertfordshire. As an academic, I’m used to attending conferences at universities or Hiltons, not countryside resorts with helicopter pads. The event was held in a grand tent that could easily hold 500 people. It was well-insulated from weather and noise, carpeted, with an extraordinary sound and projection system, consistent and fast wi-fi, comfortable chairs, and, to be honest, even the bathrooms were amazing. As I sat in my chair, discovering electrical plugs conveniently located under each seat, I couldn’t help but compare this temporary structure for Google’s few days of publicity events to public classrooms in their home state of California. I’ve conducted teaching observations in many elementary classrooms where 28 students share two or three computers, often fewer, because one computer isn’t working and a request to fix it may take days or months because budget cuts have resulted in limited staffing. I’ve lectured in university classrooms that either do not have a projector or the projector is broken, and again, the fix will take weeks or months because budget cuts have limited technical support. School-wide wi-fi is an unrealized dream at most schools. Even at the university level, a majority of classrooms in California do not have it. California schools’ permanent structures frequently do not have the insulation from weather or noise that Google’s amazing temporary structure boasted.
Google’s Big Tent Event
I wonder what could be possible if teachers had classrooms that functioned as well as Google’s Big Tent. If teachers had the technical and administrative support that benefited yesterday’s speakers, how could students’ learning experiences be improved?
Does the quality of the bathrooms reflect the health of an institution? MP Jeremy Hunt may say yes. In his address to the Big Tent group yesterday, Jeremy Hunt drew connections between the vision demonstrated in developing London’s sewage system and current efforts to improve broadband infrastructure in the UK. The secret is the size of the pipes, apparently, and larger ones will ensure preparation for future data demands. Mr. Hunt used South Korea, which is #1 in the OECD’s educational rankings, as an example of the success possible with super-fast broadband. While discussing funding models for the project, including private support, Mr. Hunt did not address what would seem an obvious part of the equation: education funding. UK schools have been hard hit by recent budget cuts, including the cancellation of the Building Schools for the Future scheme and a recent £155m in additional cuts to the standards fund. If the education rankings of South Korea are serving as justification for investing in infrastructure for faster broadband, it would seem that simultaneously cutting funding for education works at cross-purposes.
While much public debate surrounds the quality of education, often solely blaming teachers, the quality of educational environments, including support at the technical, facilities, and administrative levels, needs more attention. Let’s use Google’s Big Tent, rather than makeshift shelters, as a model for classrooms and start directing funding to supporting students and their teachers, rather than forcing them to make do without.