AI, Ethics and Activism: Bentley-Gallup Grants Fund New Faculty Research
What does the public expect from business — and how are corporations measuring up?
Since 2022, the annual Bentley University–Gallup Business in Society survey has asked Americans these questions. Year after year, results reveal shifting expectations and growing uncertainty about how businesses should respond to social, economic and technological challenges.
Findings from the 2025 Bentley-Gallup report show that the public increasingly looks to businesses for leadership, despite questioning corporate motives:
- 87% of Americans say that businesses have the power to improve lives. Yet just 60% believe they’re using that power effectively, and only 43% say they fully trust their intentions.
- 51% of Americans now say companies should take a public stance on social and political issues, up 13 points from the previous year. At the same time, 60% want their own employers to remain silent on such issues.
- 31% of Americans say they trust businesses to use AI responsibly, up from 21% in 2023. However, nearly as many (28%) say they don’t trust them at all.
This year, five Bentley faculty members are exploring these tensions — and what companies can do to address them. Their newly funded research projects examine how issues like AI, corporate activism and public trust are shaping the future of corporate leadership and governance.
Read on to learn more about their projects and the questions guiding their work.
Researcher: Aaron Ancell, Assistant Professor of Philosophy
As companies increasingly weigh in on social and political issues, a central question has emerged: Who, exactly, are businesses speaking for?
Businesses often claim to speak for various stakeholders, such as employees, customers and the communities in which they operate. Yet these groups can hold sharply different — and sometimes conflicting — views on controversial topics, complicating questions of representation and legitimacy.
Ancell’s study aims to create a framework for determining when — and under what conditions — a company can credibly claim to speak on behalf of its stakeholders. Drawing from political philosophy and research on nonprofit advocacy, he considers whether businesses, like NGOs or other mission-driven organizations, should meet heightened standards of transparency and accountability before taking a public stand.
Ultimately, Ancell’s research will offer guidance for companies seeking to engage thoughtfully and responsibly in public debates.
Researcher: Rodrigo B. DeMello, Tenure-Track Associate Professor of Management
The growing influence of generative AI (Gen AI) has focused attention on the people behind the platforms. Because these systems are trained on data selected by humans, their outputs can reflect cultural, racial or gender-based stereotypes — raising questions about fairness and accuracy.
Jointly developed with Rosana Dangui, a visiting international PhD student, DeMello’s research explores whether Gen AI models reflect the political activism of CEOs. They will prompt the models with politically charged questions and analyze the responses for signs of political bias. Then, using CEOs’ public statements as a reference, they will measure whether any bias aligns with those CEOs’ political activism.
The study also tests whether governance tools can reduce bias. DeMello and Dangui will rerun the experiments with prompts designed to encourage fairness, accountability, transparency and ethical reasoning in the model’s responses. They will then assess whether these adjustments reduce partisan tendencies in the AI outputs.
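The two-pass design described above — score the model’s baseline responses, then rerun with fairness-oriented prompts — can be sketched in miniature. This is a hypothetical illustration, not the researchers’ actual code: the model is abstracted as a callable, and the keyword lexicon, fairness preamble and scoring rule are all placeholder assumptions.

```python
# Hypothetical sketch of the two-pass bias experiment.
# LEFT_TERMS / RIGHT_TERMS form a toy lexicon; a real study would use a
# validated bias measure rather than keyword counts.
LEFT_TERMS = {"regulation", "equity", "climate"}
RIGHT_TERMS = {"deregulation", "tariffs", "tradition"}

FAIRNESS_PREAMBLE = (
    "Answer with fairness, accountability, transparency and ethical reasoning. "
)

def partisan_score(text: str) -> int:
    """Toy bias measure: count of right-leaning terms minus left-leaning terms."""
    words = text.lower().split()
    return sum(w in RIGHT_TERMS for w in words) - sum(w in LEFT_TERMS for w in words)

def run_experiment(model, prompts, mitigate=False):
    """Mean partisan score across prompts; optionally prepend the fairness preamble."""
    scores = []
    for p in prompts:
        full = (FAIRNESS_PREAMBLE + p) if mitigate else p
        scores.append(partisan_score(model(full)))
    return sum(scores) / len(scores)

if __name__ == "__main__":
    def stub_model(prompt):  # stand-in for a real Gen AI model
        if prompt.startswith(FAIRNESS_PREAMBLE):
            return "Both regulation and deregulation have trade-offs"
        return "Deregulation and tariffs protect tradition"

    prompts = ["What is your view on trade policy?"]
    print(run_experiment(stub_model, prompts))                 # baseline score
    print(run_experiment(stub_model, prompts, mitigate=True))  # mitigated score
```

Comparing the baseline and mitigated averages is the core of the test: if the governance-oriented preamble pulls the score toward zero, the prompt-level intervention reduced measured partisanship.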
As Gen AI becomes more deeply embedded in society, DeMello and Dangui’s research highlights the importance of transparency, accountability and oversight to ensure these technologies strengthen — rather than undermine — democratic values and public trust.
Researcher: Christine Liu, Assistant Professor of Accounting
Public skepticism about corporate AI claims remains high, yet venture capital continues to flow heavily into AI-driven startups. Why does this disconnect matter? If investor confidence outpaces corporate capabilities, it could lead to misallocated resources, inflated valuations and AI products that fail to deliver on their promises.
Liu’s study seeks to explain the gap between public and investor perspectives. Using novel empirical measures derived from publicly available data, she evaluates the extent to which AI startups’ stated capabilities reflect their actual technical substance.
She then examines whether venture capital funding decisions align with startups’ demonstrated capabilities or are instead driven by promotional narratives. Amid growing concerns about potential AI investment bubbles, Liu’s research offers a data-driven framework for assessing corporate AI claims. Her findings can inform policy discussions about disclosure requirements and investor protections.
Researcher: Zhizhen Lu, Assistant Professor of Global Studies
Employee expectations have become a powerful force in shaping corporate activism. While research shows that workers’ partisanship shapes their demand for public political stances from employers, few studies have examined whether ideologically motivated activism persists when it conflicts with employees’ own economic interests.
Lu analyzes whether and how self-interest moderates employees’ ideological activism in the context of the H-1B visa, a category that allows companies to hire highly skilled foreign talent. H-1B visas recently drew public attention after a major policy change that requires companies to pay a $100,000 fee per sponsored applicant.
While employees with globalist ideologies generally support immigration, it is unclear whether that stance holds when admitting skilled foreign workers would intensify competition in their own job market. In a survey of 2,000 working adults, participants will read one of two corporate statements on immigration. One takes a nationalist position, highlighting protections for existing employees. The other strikes a globalist tone, pledging support for flexible immigration policies. Afterward, participants will be asked to sign a petition demanding that their companies take a public stance and political action on H-1B visas. This behavioral outcome will then be matched with participants’ revealed preferences for globalization or nationalism, based on their views on tariff impacts and their consumption of US-origin products.
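The analysis step of a design like this amounts to cross-tabulating petition signing by assigned statement and revealed ideology. The sketch below is a toy illustration under assumed field names and simulated responses, not Lu’s actual instrument or data.

```python
# Hypothetical sketch: signing rates by (assigned condition, revealed ideology).
from collections import defaultdict

def signing_rates(responses):
    """responses: list of dicts with 'condition', 'ideology' and 'signed' (bool).
    Returns {(condition, ideology): share of that group who signed}."""
    counts = defaultdict(lambda: [0, 0])  # key -> [signed, total]
    for r in responses:
        key = (r["condition"], r["ideology"])
        counts[key][0] += r["signed"]
        counts[key][1] += 1
    return {k: signed / total for k, (signed, total) in counts.items()}

# Simulated responses for illustration only.
sample = [
    {"condition": "globalist", "ideology": "globalist", "signed": True},
    {"condition": "globalist", "ideology": "globalist", "signed": False},
    {"condition": "nationalist", "ideology": "globalist", "signed": False},
    {"condition": "nationalist", "ideology": "nationalist", "signed": True},
]
print(signing_rates(sample))
```

In a design like Lu’s, the cell of interest is globalist-leaning employees whose own labor-market position is threatened: a drop in their signing rate relative to other cells would indicate self-interest moderating ideological activism.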
Together, these responses reveal the conditions under which employees drive corporate activism agendas — and when economic self-interest prompts them to withhold such demands. Lu’s findings will provide timely guidance to companies navigating rising employee activism, political polarization and global labor competition.
Researcher: Shawn Ogunseye, Assistant Professor of Computer Information Systems
As businesses race to adopt AI, consumers increasingly question whether they can trust companies to use the technology responsibly. Strong AI governance — which signals that an organization has systems in place to ensure transparency, fairness and accountability — plays a critical role in shaping public confidence.
Ogunseye’s research takes a closer look at what he calls the “custodian deficit”: the gap between the level of responsibility the public expects from companies using AI and the limited oversight reflected in their AI governance practices.
He explores this divide by identifying five leadership roles and examining how each role’s focus shapes public perception. Builders and Strategists drive innovation and set organizational direction, while Translators and Performers turn ideas into operational systems. The fifth role — the Custodian — is responsible for identifying and operationalizing relevant AI governance policies. Ogunseye finds that while most organizations invest heavily in innovation and execution, custodianship is often an afterthought.
By linking AI governance quality to organizational legitimacy, Ogunseye’s research makes clear that long-term trust in AI requires leadership structures that prioritize responsibility alongside innovation.