I am Terra Field, a Staff Platform Engineer at Honeycomb!

Previously I was a Senior Software Engineer at Netflix.

Prior to that, I was a Senior Systems Engineer working on the Data Platform at Blizzard Entertainment.

My work:

My current title is Staff Platform Engineer, though I still think of myself primarily as a Systems Engineer. I glue stuff together to make computers do stuff, mostly with Hadoop, Kafka, & Linux. When everything falls apart, I figure out why it broke, fix it, and try to make sure it breaks differently next time.

What I look for in a team: I am a very social creature. I prefer teams that are collaborative and communication-heavy (even when working on solo projects). Especially when working remotely, I feel more connected to a team with a lot of chat activity - even if much of it is banter rather than strictly work- or project-related.

Personal life: I live in San Francisco, CA with my partners. When I’m not yelling on the Internet, I’m playing video games, visiting national parks, watching things with my partners, or petting one of our two adorable corgis. My pronouns are she/her. You can find me on Mastodon and LinkedIn.

Roles at Honeycomb (May 2022 - Present)

I recently joined Honeycomb and will be working on scaling and improving their Apache Kafka infrastructure as well as contributing to the overall platform.

Staff Platform Engineer

May 2022 - Present


Roles at Netflix (February 2019 - November 2021)

Senior Software Engineer, Operating System/Compute Team

April 2021 - November 2021

Created a service to emit metrics and events when the oomkiller is activated on any of our EC2 instances to help right-size instance types and reduce unexpected service restarts
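The bullet above doesn't describe the implementation, so here is a minimal sketch of the general idea: watch the kernel log for OOM-killer activity and turn each kill into a structured event that a metrics pipeline could consume. The log format matches modern Linux kernels, but `parse_oom_event` and the event shape are illustrative assumptions, not the actual service's API.

```python
import re

# Modern kernels log OOM kills as:
#   "Out of memory: Killed process <pid> (<comm>) total-vm:..."
OOM_PATTERN = re.compile(
    r"Out of memory: Killed process (?P<pid>\d+) \((?P<comm>[^)]+)\)"
)

def parse_oom_event(line):
    """Return an event dict if the kernel log line records an OOM kill, else None."""
    match = OOM_PATTERN.search(line)
    if match is None:
        return None
    return {"pid": int(match.group("pid")), "process": match.group("comm")}

# A real service would tail /dev/kmsg (or `journalctl -kf`) and forward each
# parsed event to a metrics backend rather than handling a single sample line.
sample = "[1234.5] Out of memory: Killed process 4217 (java) total-vm:812kB"
event = parse_oom_event(sample)
```

Emitting the process name alongside the instance type is what makes right-sizing possible: repeated kills of the same workload on the same instance family point at an under-provisioned instance type.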

Part of a four-person on-call rotation supporting the base Linux operating system on all instances and containers

Leadership, Trans* Employee Resource Group (ERG)

February 2019 - November 2021

Leader of the Education and Talent Acquisition Pods

Worked to advocate for and represent transgender and gender non-conforming individuals in the workplace, increasing awareness, visibility, and education about the trans experience

Worked with the I&D department to develop and deliver training on how to be a better colleague to trans and gender non-conforming people

Consulted with content creators on how to improve the representation of trans people both in front of and behind the camera

Senior DevOps Engineer, Personalization Infrastructure

February 2019 - April 2021

Supported a team of 60 developers and researchers in training and deploying machine learning models for personalized recommendations (row selection, box art selection, billboard selection, etc.)

Deployed and maintained Apache Spark clusters consisting of Mesos agents running on tens of thousands of EC2 instances

Implemented lineage and cost tracking, allowing us to track the cost effectiveness and data fidelity of our workflows

Helped develop, test, and deploy a method of sharding Apache Mesos to overcome scaling limitations encountered beyond 12,000 nodes
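The sharding bullet above doesn't spell out the mechanism, so this is a sketch of one common approach under that constraint: deterministically assign each agent to one of several smaller, independent Mesos clusters by hashing a stable agent identifier. The shard names and the choice of MD5 are illustrative assumptions.

```python
import hashlib

# Hypothetical shard names; each would be an independent Mesos master quorum
# sized well below the scaling limit.
SHARDS = ["mesos-shard-a", "mesos-shard-b", "mesos-shard-c"]

def shard_for_agent(agent_id, shards=SHARDS):
    """Stable mapping from an agent id to a shard, independent of join order."""
    digest = hashlib.md5(agent_id.encode("utf-8")).hexdigest()
    return shards[int(digest, 16) % len(shards)]

assigned = shard_for_agent("i-0123456789abcdef0")
```

Plain modulo hashing remaps most agents when the shard count changes, so a production version would more likely use consistent hashing or an explicit assignment table to keep rebalancing cheap.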


Roles at Blizzard Entertainment (January 2015 - January 2019)

Senior Systems Engineer, Big Data

February 2018 - January 2019

Deployed Kafka clusters on Kubernetes using StatefulSets and Metacontroller, then securely mirrored production data to them to support moving portions of our Global Data Platform into the public cloud

Eliminated licensing costs, increased performance and reliability, and enabled new features by planning and executing a zero-downtime migration of our production Kafka clusters that handle 20+ billion events per day from a commercial Kafka distribution to Apache Kafka

Reduced costs and increased operational insight by replacing commercially licensed Kafka monitoring and deployment with a combination of Puppet, Telegraf, Jolokia, and Grafana

Debugged open source and in-house applications in Scala, Java, and Python to assist developers in increasing the performance of their applications and to optimize cluster usage

UNIX Systems Engineer

January 2015 - February 2018

Planned and executed a zero-downtime migration of an aging 3-petabyte Hadoop-based data warehouse to newer hardware, using Kafka to dual-write to both clusters and working directly with our analysts and data scientists to migrate jobs. Increased compute and memory capacity by 400%, allowing us to utilize in-memory computing technologies such as Apache Spark and Impala.
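The migration above relied on dual-writing: every incoming record goes to both the old and new cluster until the new one is validated, at which point reads (and eventually writes) cut over. This is a minimal sketch of that pattern; the sink interface is a stand-in, where in practice each sink would wrap a Kafka producer pointed at one cluster.

```python
class DualWriter:
    """Write each record to a legacy sink and a new sink during a migration."""

    def __init__(self, old_sink, new_sink):
        self.old_sink = old_sink
        self.new_sink = new_sink

    def write(self, record):
        # The legacy cluster stays the source of truth until cutover, so
        # write to it first; a failure on the new cluster must not lose data.
        self.old_sink.append(record)
        try:
            self.new_sink.append(record)
        except Exception:
            pass  # in production: count the miss and backfill later

# Lists stand in for the two clusters in this sketch.
old_cluster, new_cluster = [], []
writer = DualWriter(old_cluster, new_cluster)
writer.write({"event": "login", "player": 42})
```

Comparing record counts and checksums between the two sinks is what gives the confidence to call the migration "zero-downtime" before switching consumers over.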

Reduced execution time for business reports from hours to minutes by working with business intelligence teams to migrate jobs from Streaming MapReduce to Hive, Spark, and Impala SQL.

Increased message capacity and saved trillions of application events by migrating our existing global fan-in RabbitMQ-based data pipeline to Kafka, greatly increasing customer confidence in our data pipeline

Significantly contributed to the design, testing, and deployment of a modernized ETL pipeline that made data more compact, performant, and accessible by converting it to Parquet and inserting into Hive

Played an integral role in the design, testing, operation, and deployment of Blizzard’s Telemetry pipeline for monitoring service health for all games and services

Designed and deployed a globally distributed ZooKeeper architecture to support service discovery for Overwatch

Getting in touch with me on social media is probably your best bet, but if you want to send me something privately my contact form is here.