About Us
The Future of Life Foundation (FLF) is a new organization, affiliated with the Future of Life Institute, whose mission is to steer transformative technology towards benefiting life and away from extreme large-scale risks.
…
1. Our primary plan for doing so is to identify gaps in the landscape of efforts to fulfill this mission, and then to determine what new organizations should exist to address them. Once identified, we will recruit, fund, and provide significant support to founders who demonstrate that they can bring these organizations to life.
Our tentative plan is to help create an average of 3-5 new organizations per year over the next 5 years, providing each with significant operational support and, if recruited founders further develop our research into a compelling proposal, substantial runway.
2. FLF may take on other activities as well, such as facilitating coordination and planning among organizations in our field and incubating promising founders. As a new organization, we're unsure just how narrow or expansive we'll find our scope of activities to be.
We expect our most significant focus to be on the risks posed by transformative AI, and on the governance mechanisms needed to ensure that AI is beneficial to humanity. Other areas of work could include biosecurity, nuclear technology, other risks to humanity's existence or the fulfillment of its potential, and the development of tools, institutions, and communities that support these efforts.
About the Role
We are seeking to hire an exceptional researcher specializing in AI safety to help kick off these efforts. In this position, you will play a foundational role in determining what organizations FLF seeks to help create.
We expect to collaboratively determine what research best furthers our mission. You'll play a fundamental part in this, so your experience in this position will depend largely on your take on the relevant topics, as well as your particular interests and abilities.
You will initially have both a broad, open remit and specific research directions. We have a preliminary list of ideas for organizations that we feel positively about but that require significant further investigation, and we also have a great need to identify and investigate additional potential initiatives. We do believe in short timelines, so we're hoping to make progress toward our goals at a rapid pace.
We plan to approach hiring with an open mind about how candidates' interests and backgrounds could fit with our work. While we're looking for candidates who can be effective researching across the gamut of AI safety, we encourage those with more specialized expertise or interests to apply as well.
Activities
– Conduct comprehensive research on the current AI safety landscape
– Identify gaps and potential areas where new organizations can make a significant impact
– Investigate ideas for feasibility, identifying crucial considerations and downside risks
– Engage with AI safety experts, stakeholders, and the broader community to gather insights and feedback
– Analyze the likely impact of proposed organizations in addressing safety concerns
– Select organizations that we should move forward with, and evaluate potential founders' proposals for fulfilling the intended vision
– (possibly) Contribute to the recruitment and selection of suitable founders for new organizations
– (possibly) Provide ongoing guidance, mentorship, and support to founders during the early stages of their organizations
– (possibly) Contribute to or lead research in other FLF cause areas
Likely Qualifications
– Demonstrated expertise and strong judgment with regard to risks from AI
– Proven ability to perform high-quality research, preferably in AI, existential risk, or a related domain
– Strong analytical, critical-thinking, communication, and problem-solving skills
– Passion for helping to further AI safety
Additional Possible Qualifications
– Significant research achievement, demonstrated by publications in top-tier conferences and journals, coupled with years of pertinent academic experience
– A PhD in computer science or a closely related field
– A track record of impactful grantmaking
– Experience with the founding of new organizations
– History of performing technical research beyond AI safety
– A particularly strong technical background
Logistics
You will be an employee of the Future of Life Foundation, or of our affiliated organization, the Future of Life Institute. Both are 501(c)(3) research non-profits.
– Compensation and Benefits: Compensation is competitive with Bay Area tech roles (excluding equity), plus full benefits.
– Location: We prefer people who are available to work 2-3 days per week in-person (in Campbell, CA), but will consider remote candidates. We should be able to sponsor visas (without going through the visa lottery) for those who need them.
– Applying for both research roles: If you are also interested in our Researcher (General) position, you'll be able to indicate that in your application. Please submit the application for the role you are most interested in. We may modify role definitions and activities when applicable to better suit the talent that we hire.
Miscellaneous
This role is now open on a rolling basis without a deadline, but we are actively assessing candidates who have already applied and may close the role at any time (potentially without evaluating more recent applications).
We encourage you to apply even if your background may not seem like a great fit! We would rather review a larger pool of applications than risk missing out on a promising candidate for the position.
If you have any questions about the role, please do get in touch at [email protected].
We are committed to diversity and equal opportunity in all aspects of our hiring process. We do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We welcome and encourage all qualified candidates to apply for our open positions.