Deep linguistic lensing

This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. === Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018 === Cataloged from student-submitted PDF version of thesis. === Includes bibliographical references (pages 81-84). === Language models and semantic word embeddings have become ubiquitous as sources for machine learning features in a wide range of predictive tasks and real-world applications. We argue that language models trained on a corpus of text can learn the linguistic biases implicit in that corpus. We discuss linguistic biases, or differences in identity and perspective that account for the variation in language use from one speaker to another. We then describe methods to intentionally capture "linguistic lenses": computational representations of these perspectives. We show how the captured lenses can be used to guide machine learning models during training. We define a number of lenses for author-to-author similarity and word-to-word interchangeability. We demonstrate how lenses can be used during training time to imbue language models with perspectives about writing style, or to create lensed language models that learn less linguistic gender bias than their un-lensed counterparts. === by Amin Manna. === M. Eng.

Full description

Bibliographic Details
Main Author: Manna, Amin (Amin A.)
Other Authors: Karthik Dinakar and Roger Levy.
Format: Others
Language: English
Published: Massachusetts Institute of Technology 2019
Subjects: Electrical Engineering and Computer Science.
Online Access:https://hdl.handle.net/1721.1/121630
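The abstract describes conditioning language models on "lenses": learned representations of a speaker's perspective that steer a model during training. As a minimal illustrative sketch only (not the thesis's actual method; the corpora, lens names, and functions below are all hypothetical), a toy per-lens bigram model shows the core idea that the same architecture produces different predictions under different perspectives:

```python
from collections import Counter, defaultdict

# Toy corpora for two hypothetical "lenses" (e.g. author groups).
# The thesis learns richer lens representations; this only
# illustrates conditioning a model's statistics on a lens.
corpora = {
    "lens_a": "the model is great the model is great the model is fast".split(),
    "lens_b": "the model is biased the model is biased the model is opaque".split(),
}

def train_lensed_bigrams(corpora):
    """Collect per-lens bigram counts, i.e. counts for P(next | prev, lens)."""
    counts = {lens: defaultdict(Counter) for lens in corpora}
    for lens, tokens in corpora.items():
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[lens][prev][nxt] += 1
    return counts

def predict(counts, lens, prev):
    """Return the most likely next token under the given lens."""
    dist = counts[lens][prev]
    return dist.most_common(1)[0][0] if dist else None

counts = train_lensed_bigrams(corpora)
print(predict(counts, "lens_a", "is"))  # prints "great"
print(predict(counts, "lens_b", "is"))  # prints "biased"
```

The two calls differ only in the lens argument, which is the sense in which a "lensed" model carries a perspective; the thesis applies the same idea to neural language models rather than count tables.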
id ndltd-MIT-oai-dspace.mit.edu-1721.1-121630
record_format oai_dc
spelling ndltd-MIT-oai-dspace.mit.edu-1721.1-121630 2019-08-04T03:13:54Z
Deep linguistic lensing
Manna, Amin (Amin A.)
Karthik Dinakar and Roger Levy.
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Electrical Engineering and Computer Science.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 81-84).
Language models and semantic word embeddings have become ubiquitous as sources for machine learning features in a wide range of predictive tasks and real-world applications. We argue that language models trained on a corpus of text can learn the linguistic biases implicit in that corpus. We discuss linguistic biases, or differences in identity and perspective that account for the variation in language use from one speaker to another. We then describe methods to intentionally capture "linguistic lenses": computational representations of these perspectives. We show how the captured lenses can be used to guide machine learning models during training. We define a number of lenses for author-to-author similarity and word-to-word interchangeability. We demonstrate how lenses can be used during training time to imbue language models with perspectives about writing style, or to create lensed language models that learn less linguistic gender bias than their un-lensed counterparts.
by Amin Manna.
M. Eng.
Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
2019-07-15T20:29:26Z 2018
Thesis
https://hdl.handle.net/1721.1/121630
1098174661
eng
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
84 pages
application/pdf
Massachusetts Institute of Technology
collection NDLTD
language English
format Others
sources NDLTD
topic Electrical Engineering and Computer Science.
spellingShingle Electrical Engineering and Computer Science.
Manna, Amin (Amin A.)
Deep linguistic lensing
description This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. === Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018 === Cataloged from student-submitted PDF version of thesis. === Includes bibliographical references (pages 81-84). === Language models and semantic word embeddings have become ubiquitous as sources for machine learning features in a wide range of predictive tasks and real-world applications. We argue that language models trained on a corpus of text can learn the linguistic biases implicit in that corpus. We discuss linguistic biases, or differences in identity and perspective that account for the variation in language use from one speaker to another. We then describe methods to intentionally capture "linguistic lenses": computational representations of these perspectives. We show how the captured lenses can be used to guide machine learning models during training. We define a number of lenses for author-to-author similarity and word-to-word interchangeability. We demonstrate how lenses can be used during training time to imbue language models with perspectives about writing style, or to create lensed language models that learn less linguistic gender bias than their un-lensed counterparts. === by Amin Manna. === M. Eng. === M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
author2 Karthik Dinakar and Roger Levy.
author_facet Karthik Dinakar and Roger Levy.
Manna, Amin (Amin A.)
author Manna, Amin (Amin A.)
author_sort Manna, Amin (Amin A.)
title Deep linguistic lensing
title_short Deep linguistic lensing
title_full Deep linguistic lensing
title_fullStr Deep linguistic lensing
title_full_unstemmed Deep linguistic lensing
title_sort deep linguistic lensing
publisher Massachusetts Institute of Technology
publishDate 2019
url https://hdl.handle.net/1721.1/121630
work_keys_str_mv AT mannaaminamina deeplinguisticlensing
_version_ 1719232256916062208