September 11, 2019

Facebook hosts Fellowship Summit 2019

By: Meta Research

This week, Facebook hosted the Fellowship Summit at our headquarters in Menlo Park. The annual summit is an opportunity for Fellows to share their work and connect with the company’s broader research community. Several researchers, recruiters, and program managers participated in the two-day event and gave Fellows an inside look at what it’s like to work at Facebook. For the full agenda and speaker bios, visit the event website.

Sharon Ayalde, Research Program Manager for the Fellowship program, emceed the summit and helped craft the Fellows’ experience visiting Facebook HQ. She notes that the program has grown year over year, drawing increasingly diverse applications from top talent around the world. “This year is the most diverse in terms of gender and cultural background,” she says. “We’re proud to support so many new Fellows passionate about making a positive social impact.”

We had a chance to catch up with a few Fellows at the summit to learn more about their research, current projects, and academic backgrounds. From improving health technologies in Ethiopia to maintaining election integrity in Latin America, these Fellows showcase a diverse range of research interests spanning natural language processing, computer vision, machine learning, human-computer interaction and well-being, and computational social science.

Here is a little more about them and their research.

Building machine translation systems from monolingual corpora

Mikel Artetxe is a PhD student at the University of the Basque Country, advised by Eneko Agirre and Gorka Labaka. Mikel also earned his bachelor’s degree in computer science and his master’s in natural language processing (NLP) from the University of the Basque Country.

NLP combines Mikel’s two passions: language and computer science. “I’ve always been interested in language,” Mikel says. Being from the Basque Country, Mikel is a speaker of both Spanish and Basque, a minority language with only about 750,000 native speakers worldwide. “Basque is unique because it’s not related to any other existing language, and its origin remains a mystery,” he says. “And so that made me interested in languages in general and the Basque language in particular.”

Mikel’s main research topic within NLP is unsupervised machine translation. According to Mikel, the basic principle of supervised machine translation is to learn translation patterns from a parallel corpus (a set of existing translations). In contrast, unsupervised machine translation works with non-parallel monolingual corpora (a collection of different texts in each language, without any existing translations).

“It’s as if I give you a bunch of books in Chinese and a bunch of different books in Swahili, and ask you to learn to translate between these two languages without any additional help. Unless you knew Chinese and Swahili beforehand, that would seem impossible for a human being, wouldn’t it?” Mikel says. “My work with my advisers Eneko Agirre and Gorka Labaka, and FAIR researcher Kyunghyun Cho, shows that a machine can do that, and it can actually do pretty well.”
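Mikel’s published systems are neural machine translation models, but the core idea behind learning to translate without parallel text can be illustrated with a much simpler building block he and his advisers also worked on: mapping two monolingual word-embedding spaces onto each other and inducing a bilingual dictionary by self-learning. The sketch below (all names and data are invented for illustration; the embeddings are synthetic and noiseless, and real unsupervised systems replace the tiny seed dictionary with an unsupervised initialization) alternates between fitting an orthogonal mapping on the current dictionary and re-inducing a larger dictionary by nearest neighbors:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 40  # embedding dimension, vocabulary size

# Toy monolingual embeddings: language B is a hidden rotation of language A,
# so word i in A translates to word i in B (the mapping we want to recover).
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))  # unknown "true" mapping
Y = X @ Q

def procrustes(src, tgt):
    # Orthogonal W minimizing ||src @ W - tgt||_F (closed form via SVD)
    U, _, Vt = np.linalg.svd(src.T @ tgt)
    return U @ Vt

# Self-learning loop: fit a mapping on the current dictionary, then
# re-induce a dictionary over the whole vocabulary, and repeat.
src_idx = np.arange(8)  # tiny seed dictionary (e.g., shared numerals)
tgt_idx = np.arange(8)
for _ in range(5):
    W = procrustes(X[src_idx], Y[tgt_idx])
    nn = ((X @ W) @ Y.T).argmax(axis=1)  # induced translation per word
    src_idx, tgt_idx = np.arange(n), nn

accuracy = float((nn == np.arange(n)).mean())
```

On this noiseless toy the loop recovers every translation pair; the point is only to show the shape of the idea, not the difficulty of the real problem with noisy, non-isometric embedding spaces.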

In his most recent work, published at ACL 2019, Mikel and his advisers matched the performance on English-German of the best conventional machine translation systems from five years ago. “The conventional system relied on millions of existing translations, but ours does not need any!” Mikel exclaims.

Mikel recently spent some time as an intern at FAIR Paris with Holger Schwenk, where they worked on multilingual sentence embeddings. Regarding his experience at Facebook, Mikel notes the freedom to pursue open research questions, as well as the culture of openness. “The research that you do actually impacts many people,” he says, “and I think that’s great—especially since our work was also released as open source.”

To learn more about Mikel and his research in unsupervised machine translation, visit his website.

Using crowdsourcing and AI to empower citizens to counteract misinformation campaigns

Claudia Flores-Saviaga is a PhD student in the Human-Computer Interaction (HCI) Lab at West Virginia University, where she is advised by Dr. Saiph Savage. Currently, her research involves analyzing how misinformation campaigns are created and how they are spread on and off social media, particularly in Latin America. Her goal is to create crowdsourcing systems that can help mobilize citizens to counteract these campaigns.

Before Claudia started her PhD, she worked for the government in Mexico as a technology adviser for the governor. Part of her job was to analyze how citizens interact with the government and non-governmental organizations, and within social media in general, in regard to political events and decisions.

This experience led Claudia to pursue a PhD focused on applying AI to her work in computational engineering. “Right now, what I’ve been doing is analyzing different political campaigns, especially to understand how political trolls have been able to mobilize other citizens to produce computational propaganda,” she explains. Using AI and crowdsourcing, Claudia seeks to design new systems that can coordinate citizens to detect misinformation campaigns themselves, and therefore stop these campaigns from spreading.

Facebook recently invited Claudia to speak at the Computer Vision for Global Challenges Workshop at CVPR in June. Her presentation addressed the problems particular to countries outside of the U.S. and Europe. “It’s important to understand that in Latin America, these campaigns play out a little differently than in the U.S.,” Claudia explains. “In Latin America, a lot of people don’t have a reliable internet connection, so political trolls spread disinformation offline as well.”

Her presentation also touched on the difficulty of creating a system that could automatically detect what is and isn’t a computational propaganda campaign. According to Claudia, although applying AI is important, it’s still necessary to consult people who understand political nuances better than machines do — which is where crowdsourcing comes into play.
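One way to picture the human-in-the-loop pipeline Claudia describes, purely as a hypothetical sketch (the function name and thresholds below are invented for illustration, not taken from her systems), is a triage step in which a model’s confidence decides whether content is handled automatically, routed to crowd reviewers, or left alone:

```python
def triage(score, flag_at=0.9, review_at=0.5):
    """Route a post given a model's estimated probability that it is
    part of a coordinated propaganda campaign (thresholds illustrative)."""
    if score >= flag_at:
        return "flag"          # model is confident: act automatically
    if score >= review_at:
        return "crowd_review"  # model is unsure: ask human reviewers
    return "pass"              # likely benign: no action

queue = [0.97, 0.72, 0.40, 0.55, 0.93]
routed = [triage(s) for s in queue]
```

The middle band is where crowdsourcing earns its keep: posts the model cannot confidently classify go to people who, as Claudia notes, understand political nuance better than machines do.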

In her free time, Claudia collaborates with institutions such as the Organization of American States (OAS) and Mexico’s federal government to make real-world impact in Latin America with her research. Claudia also likes to collaborate with local universities in Mexico, such as the National Autonomous University of Mexico, to conduct workshops with students. She hopes to help raise awareness of all the opportunities available in the areas of AI and crowdsourcing. “I want them to know that you can create an impact no matter where you are,” Claudia says.

To learn more about Claudia and her published research, visit her website.

Automating medical diagnoses with machine learning in rural Ethiopia

Yaecob Girmay Gezahegn is a PhD candidate at the Mekelle Institute of Technology – Mekelle University, his home institution in Ethiopia. He is co-advised by Professor Achim Ibenthal and Dr. Eneyew Adugna in a joint program between Addis Ababa Institute of Technology and HAWK University of Applied Sciences and Arts, Faculty of Technology.

Yaecob is currently investigating the detection and recognition of images of microscopic infectious diseases using machine learning techniques. His main focus is to design a robust medical image processing and machine learning–based mobile diagnosis unit that can be used in remote, underserved rural areas to automate medical diagnoses. A standalone medical diagnosis unit, according to Yaecob, could also assist less-experienced medical professionals in diagnosing patients.

Yaecob’s research in improving health technologies is motivated by a desire to help his community and those in need. “Around 80 percent of the community I am from lives in a geographically dispersed rural area where it’s hard to provide health-care services properly and promptly,” Yaecob tells us. “What’s more, well-trained health professionals are scarce, the health-care infrastructure is insufficient, and the annual income of the society is low.”

With a background in signal processing and medical image processing, Yaecob came to his current field of research when he realized that machine learning–based algorithms are generic and could be applied to time-consuming diagnostic challenges, especially those involving widespread diseases that affect millions of lives every year.

Yaecob and his team’s research aims to introduce medical image digitization to the Ethiopian health industry, which includes the collection and storage of image data sets. “At first, I thought I could get medical images in clinics or hospitals, but microscopic images are hard to get,” Yaecob says. “I visited all the well-known hospitals and health research institutions in Ethiopia in search of digital data sets of the infectious diseases — but I couldn’t get even a single image.”

This meant Yaecob had to push his research a little bit further. In collaboration with the Tigrai Health Research Institute, he collected blood smear slides of these infectious diseases, bought his own digital microscopes, and formed a team of two parasitologists, two microbiologists, and a medical doctor. “As of now I have collected more than 10,000 microscopic images of infectious diseases, including tuberculosis and leishmania, a neglected tropical disease,” he tells us.

Although he was unable to attend the summit, Yaecob hopes the fellowship will lead to meaningful connections with other seasoned and experienced professionals.

Using HCI to improve health technologies and combat health inequities for women of color

Vanessa Oguamanam is a computer science PhD candidate at the Georgia Institute of Technology, where she is co-advised by Dr. Betsy DiSalvo and Dr. Neha Kumar. She holds a bachelor’s degree in computer science and a master’s degree in HCI from the University of Maryland, College Park.

Vanessa’s research sits at the intersection of ubiquitous computing, HCI, and underserved communities. Her current work explores how black women across different age groups and socioeconomic statuses define and practice wellness in the local south. Particularly, Vanessa is investigating how technology shapes wellness definitions and practices for these women, and how culture influences the adoption and use of consumer wellness technologies for them. With this work, Vanessa will contribute design considerations and guidelines for culturally relevant m-health interventions for black women in the local south.

Vanessa finds passion in work supporting underserved communities worldwide. As part of her capstone project for her master’s, Vanessa co-founded She Hacks Africa (SHA) with the Working to Advance STEM Education for African Women Foundation. SHA is an intensive coding camp based in Nigeria that is designed for, but not limited to, young women in Africa. Participants between the ages of 17 and 34 receive training in computing skills, including web and mobile development, entrepreneurship, and design thinking, so that they can build applications that address issues within their communities.

Additionally, Vanessa is active in the academic community and recently attended the CRA Grad Cohort for Underrepresented Minorities and Persons with Disabilities Workshop in Waikoloa; the Richard Tapia Celebration of Diversity in Computing in San Diego; the Race & Biomedicine Working Group Retreat in Serenbe, Georgia; Health Systems: The Next Generation 2018 in Atlanta; and Summit 21, also in Atlanta.

According to Vanessa, she was inspired to pursue this field of research so that she could develop m-health interventions to help combat health inequities, particularly within racial and ethnic minority communities. She applied for the Facebook Fellowship program after learning that Instagram conducts research related to well-being. “Today, we see just how being on social networks, such as Facebook and Instagram, can impact one’s overall well-being,” Vanessa says. “I hope to be able to draw from the current work being done at Instagram as well as share insights from my research to help them further their own work.”

Learn more about Vanessa and her research on her website.

Teaching computers to see the world through video understanding

Chao-Yuan Wu is a computer science PhD candidate at the University of Texas at Austin, working with Philipp Krähenbühl. His current research focuses on computer vision and, in particular, video understanding. “A central goal of computer vision is to have machines see the world and make sense of it just like how we do with our eyes,” Chao-Yuan explains. “Video understanding is in contrast to image understanding, which looks at just one image. I’m interested in getting a computer to look at a sequence of visual patterns and then try to make sense of it.

“You can think of it like watching a movie,” he continues. “We don’t make sense of it with just one scene; we need to watch the whole thing in reference to the present, the past, and the future. My goal is to design machines that can do this.”
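Chao-Yuan’s movie analogy can be made concrete with a toy example (entirely of my own construction, not from his work): a “video” of a point drifting left or right. Any single frame is just a position and says nothing about the motion, but the sequence reveals it immediately, which is exactly the gap between image understanding and video understanding:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_clip(direction, length=8):
    # A tiny "video": a point moving left (-1) or right (+1) along a line
    start = rng.uniform(-5.0, 5.0)
    return start + direction * np.arange(length)

clips = [(make_clip(d), d) for d in (+1, -1) for _ in range(20)]

# One frame (a single position) carries no motion information, but the
# sequence does: the mean frame-to-frame difference recovers the velocity.
def classify_motion(clip):
    return 1 if np.diff(clip).mean() > 0 else -1

accuracy = float(np.mean([classify_motion(c) == d for c, d in clips]))
```

Here a trivial temporal feature classifies every clip correctly, while no classifier built on one frame could do better than chance; real video models learn far richer temporal structure, but the motivation is the same.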

Before Chao-Yuan applied for a Facebook Fellowship, he was an intern at Facebook AI Research (FAIR). With Ross Girshick as his mentor, Chao-Yuan worked on projects related to video understanding. One project was about enabling models to do long-term reasoning, and another sought to improve the efficiency of video understanding algorithms.

Regarding his experience working at FAIR, Chao-Yuan mentions the freedom and openness built into the team. “They’re working on some of the most important research problems in the field of computer vision, and these problems are driven by curiosity as opposed to an industrial goal,” he says. “I really liked this free, curiosity-driven research style.”

Recent advancements in artificial intelligence have given rise to new applications in various fields of research, including computer vision. This means there are plenty of new and unexplored problems to solve, which creates a lot of potential for positive impact. According to Chao-Yuan, his research is driven by a desire to make the world a better place. “If a machine can make sense of the visual world, a self-driving car can drive safer,” he explains. “We can have better health-care systems, like understanding the elderly and helping them with some robotic assistance that understands humans and can interact with humans effectively.

“In order for an intelligent agent to do something useful and meaningful,” he continues, “being able to see first is an important part. This is why video understanding is an important problem to solve.”