Hi, I’m Ryland. I am a qualitative researcher asking critical questions about technology, the environment, and Silicon Valley ethics and values. I earned an MA in Communication from Simon Fraser University, where I wrote a thesis about the socio-technical frictions that impede effective climate crisis communication on TikTok. I am now a pre-doctoral research assistant at Microsoft Research New England’s Social Media Collective.
At the Social Media Collective, I am the shared research assistant of Mary Gray, Tarleton Gillespie, Nancy Baym, and danah boyd. My work at the SMC questions the ethics and politics of technology, especially generative artificial intelligence. Several projects I have overseen or provided credited research assistance on have been published or presented at venues like the Association of Internet Researchers conference, Sociologica, and ACM Computer-Supported Cooperative Work (CSCW).
I have contributed to numerous interdisciplinary research projects related to environmental communication and digital social science research methods. I am a leader within my graduate school community, having served as Graduate Caucus Co-chair, as a Student-Faculty Seminar facilitator, as an integral member of the 2022 and 2023 Conduits Graduate Conference Organizing Committees, and as co-founder of the university-wide Climate Communication Grad Student Working Group. I have a background in documentary film production, and I feel most at home in creative spaces like SFU’s Media & Maker Commons, where I worked as an instructor for two years.
I currently have several research collaborations at various stages of being cooked. In one, we're looking at the types of AI-generated content that AI companies and researchers deem undesirable, and how they developed these categories. In another, we're using STS methods to investigate how states are regulating and deploying carbon capture technology infrastructure. Finally, as I prepare to enter a PhD program, I've been thinking about the tensions between far-fetched but captivating fears of uncontrollable “artificial general intelligence,” the calls of venture capitalists who decry efforts to build AI safely as antithetical to societal progress, and the Responsible AI practitioners caught between these seemingly opposed attitudes. Through all of this work, I try to chart alternative paths forward for a society that neither benefits equally from Silicon Valley’s innovations nor has the vocabulary to imagine more positive futures.