If you’re among the growing number of professionals relying on the AI-powered note-taking application Granola to streamline your meeting summaries, an immediate review of your privacy settings is highly advisable. Despite Granola’s assertion on its security page that your notes are “private by default,” a closer look reveals a critical distinction: these notes are, in fact, viewable by anyone possessing a direct link. This default setting, coupled with the application’s practice of utilizing user data for internal AI model training unless explicitly opted out, presents significant privacy considerations that warrant user attention.
Understanding Granola’s Core Functionality and Value Proposition
Granola positions itself as an “AI notepad for people in back-to-back meetings,” a tool designed to alleviate the burden of manual note-taking in a fast-paced corporate environment. Its appeal lies in its seamless integration with users’ calendars, allowing it to capture audio from scheduled meetings. This audio is then intelligently processed by AI algorithms, which generate concise, bulleted summaries – referred to as “notes” – of the discussion. This core feature saves valuable time, ensures key points are captured, and provides an accessible record of conversations.
Beyond simple summarization, Granola offers a suite of collaborative and analytical functionalities. Users can actively edit the AI-generated notes, refining them for accuracy or adding personal insights. The platform also supports collaboration, enabling users to invite others to view and potentially contribute to shared notes, fostering team alignment and information dissemination. Furthermore, Granola incorporates an AI assistant, empowering users to query their notes, extract specific information, and review the comprehensive meeting transcripts from which the summaries are derived. This robust feature set makes Granola an attractive solution for individuals and teams seeking to optimize their meeting workflows and knowledge management.
The “Private by Default” Paradox: A Deep Dive into Link Sharing Vulnerability
The most striking privacy concern within Granola arises from the fundamental mismatch between its stated “private by default” policy and its practical implementation. While the term “private by default” typically implies restricted access that requires explicit permission for sharing, Granola’s application settings menu explicitly clarifies, “By default, your notes are viewable to anyone with the link.” This distinction is not merely semantic; it represents a significant security loophole that could expose sensitive information.
The implication is profound: any Granola note, immediately upon creation, is assigned a unique, public URL. If this link is inadvertently shared, leaked, or even guessed, the content of that note becomes accessible to anyone on the internet, regardless of whether they have a Granola account or are even associated with the user’s organization. This poses a potentially catastrophic risk, especially for users who frequently record meetings containing highly confidential information, such as strategic business plans, financial data, intellectual property discussions, legal consultations, or sensitive HR matters.
Independent testing confirms this vulnerability: accessing a Granola note via its public link from a private browser window (incognito mode) requires no login whatsoever. The note’s content is fully displayed, and in many cases, the interface even reveals the name of the note’s owner and its creation timestamp, adding another layer of identifiable information to potentially exposed data. This lack of authentication for link-shared notes means that the security of your information hinges entirely on the absolute secrecy of the URL – a notoriously difficult and unreliable security measure in the digital age. Accidental sharing could occur through a simple copy-paste error into a public chat, an email sent to the wrong recipient, or even through insecure browser history. The potential for a “bad actor” to stumble upon or deliberately seek out such links, while perhaps low probability for a single note, becomes a non-trivial risk across a large user base.
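This is straightforward to verify programmatically. The short Python sketch below fetches a note link with no cookies or session state, the scripted equivalent of opening it in an incognito window, and reports whether the content comes back without authentication. The URL is a hypothetical placeholder (Granola’s actual link format is not publicly documented), and the login-redirect check is a rough heuristic, not an official API:

```python
import requests  # third-party: pip install requests

# Hypothetical note URL; substitute a link to a note you own to test.
NOTE_URL = "https://notes.granola.example/d/abc123"

# Send the request with a fresh session: no cookies, no auth headers,
# mimicking a visitor who has never logged in.
response = requests.get(NOTE_URL, allow_redirects=True, timeout=10)

# Heuristic: a 200 response that did not bounce to a login page
# means the note body is readable by anyone holding the URL.
if response.status_code == 200 and "login" not in response.url.lower():
    print("Note content is reachable without authentication.")
else:
    print(f"Access appears gated (HTTP {response.status_code}, "
          f"final URL: {response.url}).")
```

Run against a note you own before and after changing the sharing setting described later in this article, and the same request should flip from readable to gated.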
The Nuances of Transcript Access
While the notes themselves are immediately accessible via public links, the situation with full meeting transcripts presents a slightly more ambiguous picture. When viewing a public note, an individual with the link can select specific bullet points generated by Granola. This action often pulls up a direct quote from the underlying transcript that the bullet point refers to, accompanied by an AI-generated summary providing additional context. This partial access still carries significant risk, as even snippets of a conversation can reveal sensitive details.
Granola’s website states that “full transcript access is available to collaborators who open the same folder or note inside the Granola desktop app.” However, the critical ambiguity lies in the definition of “collaborators.” Does this term refer exclusively to individuals explicitly invited by the note owner to a specific workspace, or does it extend more broadly to anyone within the same company domain who uses Granola? Or, even more concerningly, could it imply that any Granola account holder, upon discovering a public link, might gain access to the full transcript if they happen to open it within their desktop application? Granola’s lack of clarity on this crucial point, and its unresponsiveness to inquiries for more information, leaves a significant gap in understanding the true scope of potential data exposure. The implications of unintended full transcript access are far more severe than partial access, as transcripts contain every spoken word, offering a complete and unredacted record of a meeting.
Empowering Users: Adjusting Granola’s Privacy Settings
Fortunately, Granola does provide users with the ability to modify their default link-sharing settings, offering a crucial pathway to enhance data security. It is imperative for all Granola users, particularly those handling sensitive information, to review and adjust these settings without delay.
To change who can view your notes via a link:
- Open the Granola application.
- Locate your profile: This is typically found in the bottom-left corner of the screen.
- Access Settings: Click on your profile, then select “Settings” from the ensuing menu.
- Navigate to “Default link sharing”: Within the settings panel, find the option labeled “Default link sharing.”
- Modify the default: The default setting is “Anyone with the link.” Users should change this to either:
  - “Only my company”: This option restricts access to individuals within your organizational domain, assuming your company uses Granola and is properly configured. While more secure than public access, it still means colleagues could potentially view notes not explicitly shared with them if they have the link.
  - “Private”: This is the most secure option, ensuring that only you (the note owner) can access the note unless you explicitly share it with specific collaborators within the Granola app. This setting prevents accidental public exposure through a leaked link.
It’s also important to note that if a note is deleted from your Granola account, any previously shared links to that note will cease to function, effectively removing access to its content. This provides a mechanism for remediation in cases of accidental exposure, though proactive prevention through correct privacy settings is always preferable.
Historical Precedent and Real-World Security Concerns
The privacy concerns surrounding Granola’s default link-sharing settings are not entirely new. As far back as last year, a user on LinkedIn, Nik Gupta, publicly highlighted this issue, cautioning that while these public links are “not indexed” by search engines (meaning they won’t typically appear in Google search results), if a link is “shared or leak[ed] – even accidentally – it’s public to whoever finds it.” This underscores the point that non-indexed links do not equate to private or secure links; they merely reduce the likelihood of discovery through general web searches, but remain fully accessible to anyone with the exact URL.
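Gupta’s warning reflects a basic technical distinction: “noindex” is an advisory hint to search-engine crawlers, not an access control. A quick Python sketch (again using a hypothetical note URL) makes the two signals visible side by side. The HTTP status code determines who can actually read the page; a robots meta directive merely asks crawlers not to list it:

```python
import requests  # third-party: pip install requests
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tag on the page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.directives.append(attrs.get("content") or "")

# Hypothetical shared-note URL, for illustration only.
URL = "https://notes.granola.example/d/abc123"
response = requests.get(URL, timeout=10)

parser = RobotsMetaParser()
parser.feed(response.text)

# HTTP 200 means anyone with the URL can read the note;
# "noindex" only asks search engines not to surface it.
print(f"HTTP status:      {response.status_code}")
print(f"robots meta tags: {parser.directives or 'none found'}")
print(f"X-Robots-Tag:     {response.headers.get('X-Robots-Tag', 'none')}")
```

In other words, a note can be simultaneously invisible to Google and wide open to anyone who obtains its address.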
The real-world implications of such default settings are substantial, particularly in enterprise environments. A source told The Verge that at least one major corporation has already prohibited a senior executive from using Granola due to these very security concerns. This corporate decision highlights the critical importance of data governance, regulatory compliance (such as GDPR, HIPAA, and CCPA), and the protection of intellectual property. For companies dealing with sensitive client data, proprietary research, or strategic discussions, the risk of accidental information disclosure through a tool with such default settings is simply too high to tolerate. This incident serves as a stark reminder that the convenience offered by AI tools must always be weighed against robust security protocols.
AI Training and Data Usage: Another Layer of Privacy Considerations
Beyond the link-sharing vulnerability, Granola’s approach to using user data for AI model improvement introduces another significant privacy consideration. According to the app’s support page on “Model Training,” Granola “may use anonymized data” to enhance its underlying AI models. While the term “anonymized” suggests data stripped of personally identifiable information, the effectiveness and irreversibility of anonymization techniques are often debated and can be challenging to guarantee entirely, especially with rich, contextual data from meetings.
Crucially, Granola distinguishes between different user plans regarding this practice. Enterprise customers are automatically opted out of AI training by default, reflecting a recognition of their heightened privacy requirements. However, individuals on all other plans are not opted out by default. This means that if you are using Granola under a standard or personal plan, your meeting notes and transcripts are likely contributing to Granola’s AI model development unless you take specific action.
Users can disable this AI training by navigating to the settings menu and toggling off the option labeled “Use my data to improve models for everyone.” Granola assures users that even if this setting is enabled, it does not permit third-party AI companies, such as OpenAI or Anthropic, to directly use your data for their own AI training. However, the data is still used internally by Granola itself. This distinction is important for users to understand, as it affects the extent to which their confidential conversations might inadvertently contribute to the development of AI technologies.
Granola’s Stated Security Infrastructure
Despite the discussed privacy concerns surrounding default settings and data usage, Granola does outline several technical security measures designed to protect user data. According to its security page, the company stores all user notes within a US-hosted Amazon Web Services (AWS) private cloud. This choice of infrastructure typically implies a robust, scalable, and secure environment. Granola further states that all notes are “encrypted at rest and in transit.”
- Encryption at rest means that data stored on Granola’s servers (in the AWS cloud) is encrypted, making it unreadable to unauthorized parties who might gain physical access to the storage infrastructure.
- Encryption in transit signifies that data exchanged between your device and Granola’s servers, as well as between different components of Granola’s system, is encrypted. This protects information from interception during transmission.
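Of the two, encryption in transit is the only guarantee an end user can observe directly; encryption at rest happens entirely inside Granola’s infrastructure and can only be taken on trust or verified through third-party audits. The minimal Python sketch below, pointed at a placeholder host (substitute whatever endpoint you want to inspect), performs a TLS handshake and prints the negotiated protocol and certificate details:

```python
import socket
import ssl

# Placeholder host; substitute the service endpoint you want to inspect.
HOST = "example.com"
PORT = 443

context = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    # wrap_socket performs the TLS handshake and raises an error if the
    # certificate is invalid or the hostname does not match.
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()
        print(f"TLS version:  {tls.version()}")
        print(f"Cipher suite: {tls.cipher()[0]}")
        print(f"Issued to:    {dict(pair[0] for pair in cert['subject'])}")
        print(f"Expires:      {cert['notAfter']}")
```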
Another notable point is that Granola explicitly states it “doesn’t store audio from meetings.” Instead, it only saves the processed meeting notes and the corresponding transcripts, both of which are handled and processed in the cloud. This reduces the risk associated with storing raw audio files, which can be larger and potentially contain more nuanced sensitive information.

While these technical safeguards are industry standard and commendable, they primarily address threats like data breaches or unauthorized access to Granola’s backend systems. They do not, however, inherently mitigate the risks posed by a default setting that makes notes publicly accessible via a simple link, or the implications of data being used for AI training without explicit opt-in. Users must understand that even encrypted data can be exposed if the access mechanism (the public link) is not properly secured or configured.
Broader Implications for AI Tool Adoption and Digital Hygiene
Granola serves as a cautionary case study for the broader adoption of AI-powered tools in professional settings. While these tools offer undeniable benefits in productivity and efficiency, users and organizations must exercise extreme caution and diligence regarding data privacy and security. The “private by default” misnomer highlights a common challenge: a company’s interpretation of privacy may differ significantly from user expectations.
Users are encouraged to adopt a proactive approach to their digital hygiene when integrating new AI applications into their workflow. This includes:
- Always scrutinizing default settings: Never assume that the most secure or privacy-preserving option is the default. Always check and configure settings to align with your personal or organizational privacy policies.
- Reading terms of service and privacy policies: While often lengthy, these documents contain vital information about how your data is collected, stored, processed, and shared. Pay particular attention to sections on data ownership, anonymization, and third-party access.
- Understanding data flow: Be aware of what types of data (audio, text, metadata) are being captured, where they are stored, and for what purposes they are used.
- Implementing corporate IT policies: Organizations should develop clear policies for the use of third-party AI tools, including security assessments, approved vendor lists, and guidelines for handling sensitive information.
- Advocating for transparency: As users, demanding greater transparency from AI tool developers regarding their data practices and clearer, more user-friendly privacy controls is essential.
Conclusion
The Granola AI note-taking app, while innovative and beneficial for meeting productivity, presents significant privacy challenges stemming from its default settings. The core issue lies in the fact that notes are “viewable to anyone with a link” by default, contradicting the common understanding of “private by default.” This design choice creates a considerable risk of accidental exposure for sensitive meeting information, as confirmed by independent testing. Furthermore, the ambiguity surrounding full transcript access for collaborators and the default opt-in for AI model training for non-enterprise users add further layers of concern.
While Granola employs industry-standard security measures like AWS hosting and encryption, these do not negate the user’s responsibility to actively manage their privacy settings. It is paramount for all Granola users to immediately navigate to their app settings and change the “Default link sharing” option from “Anyone with the link” to either “Only my company” or, preferably, “Private.” Additionally, users should consider opting out of AI model training if they wish to prevent their anonymized data from contributing to Granola’s AI development. The experiences of a LinkedIn user highlighting these issues and a major company denying its executive the use of the tool due to security concerns underscore the real-world implications. In an era where AI tools are rapidly integrating into our professional lives, vigilance, proactive privacy management, and a critical evaluation of default settings are no longer optional but essential for safeguarding sensitive information.