Brodeur AI Guidelines
At Brodeur, we take a human-led approach to AI, ensuring that no AI-augmented or -generated content is ever delivered to a client or shared externally without human scrutiny. Below are the public-facing principles that guide our use of AI. These principles cover all of our client-facing work and deliverables. They apply to all Brodeur employees and serve as context for vendors with whom we engage. They do not constitute a legal contract or a full AI policy.
BRODEUR AI GUIDING PRINCIPLES
Continuous Learning: As an innovative company, we will continue to experiment and selectively adopt AI tools that genuinely improve the quality and efficiency of our work. We believe in the power of AI but do not pursue automation as an end goal itself. Rather, we prioritize technology that acts as a force multiplier for our human expertise.
Appropriate AI Tasks: While AI may be used in our processes, experts are always the final authors and strategic owners of every deliverable. We also restrict the use of AI to appropriate tasks. Here is how we delineate:
- Low-to-Medium Risk Tasks: We may use AI to support our content creation efforts, primarily to review and enhance concepts developed by our team members. All content remains created by our team, even when assisted by AI. Examples include: idea generation, language polishing, and early drafts.
- High-Stakes Strategy: AI is never used to make strategic decisions regarding your brand positioning, crisis response, or reputation management. Human judgment remains the sole driver for all high-value advice and sensitive communications. Examples include: crisis statements, legal communications, and regulated industry content.
“Human-in-the-Loop” Mandate: No AI-augmented or -generated content is ever delivered to a client or shared externally without rigorous human oversight; no unreviewed, unedited AI output is ever sent to a client.
- Verification: AI outputs are treated as drafts, not authoritative facts. We independently verify all statistics, quotes, and data points against trusted primary sources.
- Quality Control: Our team reviews every deliverable to ensure it meets your specific brand voice, tone, and strategic objectives.
Data Privacy and Confidentiality: Protecting your proprietary information is a baseline requirement for us.
- Input Governance: We do not input non-public, sensitive, or “inside” information into public AI models. Here, “public AI models” means consumer tools accessed via public websites or apps that lack dedicated enterprise data protections or no-training commitments.
- Secure Workflows: We follow data-protection practices designed to minimize the exposure of your intellectual property and ensure your data is never used to train public LLMs. Where we use enterprise AI services, we do so under contracts that prohibit training on client data and include appropriate security and confidentiality protections.
Use Transparency: We believe in a transparent approach to the technology we use in our workflows.
- Disclosure: We will call out any use of AI that materially impacts the final client deliverable, and we are always happy to discuss our use of AI in more detail. For instance, we will disclose when AI-generated imagery or video has been used in a potentially public-facing asset.
- Proactive Consultation: If a project involves a significant or novel use of AI beyond routine drafting, we will discuss it with you in advance to set appropriate boundaries.
Respect for Intellectual Property and Ethical Use: We are committed to the responsible and ethical utilization of technology.
- IP Protection: We take all reasonable steps to avoid using AI in ways that could infringe copyrights or replicate protected content.
- Ethical Use: Our team provides continuous human validation to verify accuracy, mitigate bias, and adjust for performance changes as these tools and workflows evolve.
