The Closed-Loop Expert
Audience: government agencies, hospitality, corporate — mostly IT officers
Hello everyone — I'm David Panonce, CTO and Co-founder of FrontierAI. We're an AI consultancy focused on deploying autonomous agent systems for enterprises.
I'm also the Co-lead of AI Pilipinas Cebu, where we foster the local AI ecosystem through mentorship and speaking engagements.
Today I'll be sharing practical tools and strategies you can take back to your organizations immediately.
Good morning everyone. I'm David Panonce, CTO of FrontierAI.
Before we start — quick show of hands: how many of you have used ChatGPT or any AI chatbot at work, even once? (pause for hands)
Now keep your hand up if you were 100% confident the answer it gave you was accurate. (most hands drop)
That's exactly why we're here. In government and regulated industries, one leak, one hallucination, one wrong answer can cost careers — or worse, compromise public trust.
Today I'm going to show you a tool called NotebookLM from Google. Think of it as a private librarian that only reads your books. It doesn't browse the internet. It doesn't make things up from training data. It only answers from the documents YOU give it.
This session is about 20-25 minutes, and by the end, you'll see me turn a 100-page document into a 10-minute podcast. But first — let's talk about the problem.
Let me paint the picture. Your agency or organization has hundreds — maybe thousands — of policy documents. HR handbooks, procurement guidelines, service-level agreements, regulatory compliance documents.
Nobody reads them cover to cover. Be honest — when was the last time someone in your office read the entire procurement manual? (pause for laughs)
So what happens? People ask each other. They guess. They Google it. Or worse — they paste the question into ChatGPT.
Here's the problem with that: a study published on arXiv found that ChatGPT hallucinates — confidently makes up answers — on roughly 40% of questions about specific documents. Forty percent.
Now imagine that 40% error rate applied to a government contract. Or an SLA with a vendor. Or an HR policy about termination procedures. One wrong answer, cited as if it were gospel, and you've got a legal problem.
(For government audience) In your world, a misinterpreted policy isn't just embarrassing — it can trigger audit findings, legal challenges, or worse.
(For hospitality audience) In hospitality, if your front desk staff gets wrong information about a franchise agreement or health protocol, that's a compliance violation waiting to happen.
The core problem: information overload plus AI that confidently lies.
Let me be more specific about hallucination, because this matters for IT officers making tool recommendations.
When we tested general-purpose AI like ChatGPT and Gemini on document-specific questions, the hallucination rate was around 40%. These tools are trained on the entire internet — they have opinions about everything, even things they shouldn't.
NotebookLM's error rate? About 13%. And here's the critical difference — that 13% isn't fabrication. It's interpretive overconfidence. It might summarize a clause slightly too broadly. But it does NOT invent clauses that don't exist.
And when NotebookLM doesn't know? It actually says: "I don't have that in your sources." That's huge. A cautious advisor who admits uncertainty is infinitely more valuable than a confident liar.
Every answer comes with inline source citations — it tells you exactly which document and which passage it's referencing. You can verify in seconds.
So how does this actually work? For the IT officers in the room, this is the architecture slide.
NotebookLM uses something called RAG — Retrieval-Augmented Generation. Here's the simple version:
1. You upload documents — PDFs, Google Docs, Slides, even YouTube videos
2. The system chunks them — breaks them into searchable passages
3. Creates embeddings — mathematical representations of each passage
4. When you ask a question — it searches for the most relevant passages using cosine similarity
5. Generates an answer — but ONLY from those retrieved passages
Think of it this way: it indexes your library, then only answers from those shelves. There is no internet access. If you didn't upload it, the AI doesn't know it.
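For anyone who wants to see the mechanics, here is a minimal, hypothetical sketch of that retrieve-then-answer loop — toy bag-of-words embeddings and made-up document chunks, not NotebookLM's actual implementation:

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words term counts.
    # Real systems use learned dense vector embeddings instead.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, chunks, top_k=2):
    # Rank every chunk against the question; keep the best top_k.
    q_vec = embed(question)
    return sorted(chunks,
                  key=lambda c: cosine_similarity(q_vec, embed(c)),
                  reverse=True)[:top_k]

# Steps 1-2: "upload" the documents and chunk them into passages.
chunks = [
    "Section 4.2: A penalty of 1% of contract value applies per week of delay.",
    "Section 7.1: Grievances must be filed in writing within 15 working days.",
    "Section 2.3: Annual leave accrues at 1.25 days per month of service.",
]

# Steps 3-5: embed, retrieve by cosine similarity, and answer ONLY from
# the retrieved passages — the model never sees anything you didn't upload.
context = retrieve("What is the penalty for delay?", chunks)
print(context[0])
```

The design point the sketch illustrates: the answer step only ever receives the retrieved passages, which is why a grounded system can say "I don't have that in your sources" instead of guessing.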
One important caveat I need to flag for you: Google recently added a 'Deep Research' mode to NotebookLM that DOES browse the web. For your use case, keep modes separate. Use the standard notebook for confidential document work. Deep Research is a different tool for a different purpose.
Let's talk about what goes in. NotebookLM accepts PDFs, Google Docs, Google Slides, plain text, Markdown, web URLs, YouTube videos, and audio files. The Enterprise tier adds Microsoft Office formats — Word, Excel, PowerPoint.
Each source can be up to 500,000 words or 200 megabytes. On the free tier, you get 50 sources per notebook. Google's Plus tier bumps that to 300. Enterprise gives you 600.
And here's what matters for this room: the Enterprise tier is now a Google Workspace core service. That means it inherits your existing security posture — VPC Service Controls compliance, IAM access controls, full audit trails, and admin console management. If your agency already runs Google Workspace, NotebookLM Enterprise slots right into your existing governance framework.
Let me give you concrete use cases tailored to who's in this room.
For government agencies:
• Policy Q&A: Your staff asks plain-language questions about official policies — gets cited answers, not guesses. "What's the procurement threshold for direct purchase?" — answer with exact policy reference.
• Compliance gap analysis: Upload the regulation AND your internal policy side by side. Ask: "Where do we fall short?" NotebookLM compares them instantly.
• Contract review: Upload a 50-page vendor contract. Ask: "What are all the penalty clauses?" It extracts every one with page references.
• Legislative research: Upload multiple bills, ask questions that compare them across documents.
The state of Georgia actually published official guidance on using NotebookLM for government adoption — so there's precedent.
For hospitality and corporate operations:
• SOP Q&A: "What's the checkout procedure for VIP guests?" — cited, step-by-step from your actual manual.
• Audio training in 76 languages: Convert your operations manual into a podcast. Your multilingual staff can listen during their commute.
• New hire onboarding: Upload the employee handbook. New hires get a 24/7 training assistant from day one.
The common thread: the AI only knows what you tell it, and it proves where every answer comes from.
A few more operational examples: health and safety regulations become interactive compliance checklists. Front-of-house staff can query menu and allergen information instantly. And for event planning, upload your SOPs, vendor contracts, and setup guides, then query across all of them at once.
I know the question on every IT officer's mind: where does the data go?
Here's Google's verified policy:
• Google does NOT use your uploaded data to train AI models
• On Workspace Business and Enterprise plans: zero human review of your content
• Enterprise tier adds: VPC Service Controls compliance, full IAM controls, audit trails, AD/Okta integration
Important caveats I want to be transparent about:
• This IS cloud-based — your documents are on Google's servers. Verify data residency requirements for your organization.
• Personal Gmail accounts do NOT have the same protections
• For truly sensitive data where nothing can leave your building — that's Session 3 this afternoon, where I'll show you AI that runs entirely on a laptop with no internet.
NotebookLM is the middle ground: much safer than ChatGPT, not as private as local AI. For most document work, it's the sweet spot.
Alright, enough talking. Let me show you.
(Open NotebookLM in browser)
I'm creating a new notebook right now. I'm going to upload two documents:
1. A 50-page mock procurement contract — with penalty clauses, SLA terms, and deliverable timelines
2. A 100-page HR policy handbook — covering leave policies, grievance procedures, and code of conduct
(Upload documents, wait for processing — ~30 seconds)
Now watch this. I'm going to ask it: "What are all the penalty clauses in the procurement contract, and what are the monetary thresholds?"
(Show the answer with inline citations)
See those blue citations? Each one links back to the exact page and passage. You can click and verify. This isn't the AI guessing — it's showing its homework.
Now let me try the HR policy: "What is the procedure for filing a workplace grievance, step by step?"
(Show answer)
Every step cited. Now here's the fun part...
(Click Audio Overview)
I'm going to turn this entire HR policy into a 10-minute podcast. Two AI hosts will discuss the key points in a conversational format. This takes about 2-3 minutes to generate.
(While it generates) Imagine sending this to your team. Instead of asking everyone to read 100 pages — they listen to a 10-minute podcast on their commute. In any of 76 languages.
(Play a segment of the audio when ready)
That's a 100-page policy, turned into something your team will actually consume.
Your turn. Challenge it. Ask me any question about these documents. Try trick questions — ask about something that ISN'T in the documents and see if it makes things up.
(Take 3-4 questions from audience, type them live)
(If someone asks something not in the docs): See that? It said "I don't have information about that in your sources." That's exactly what you want. An AI that admits when it doesn't know.
Before we wrap this session, let me be honest about the limitations:
• No offline mode — you need internet
• Notebooks are isolated — you can't cross-reference between notebooks
• Free tier caps at 50 sources
• No formatted citations — no APA or MLA output
• No real-time collaboration on the same notebook
• Safety flags may trigger on sensitive government topics — you might need to rephrase
But with 80,000+ organizations already using this tool, including government agencies, the momentum is real.
My recommendation for your organization: Start with one use case. Pick your most-asked-about document — maybe your procurement manual or HR handbook — upload it, and pilot it with your team for two weeks. You'll likely know within the first week whether it's transformative.
Questions before we move on?
Before we wrap up — if what you've seen today resonated, FrontierAI can help your organization deploy these systems.
We build custom AI solutions: agentic workflows, document intelligence pipelines, and private AI infrastructure.
We're currently offering a limited promo for our first 10 enterprise customers — $1,800, down from our standard $9,000.
Why the discount? We're actively developing a product called ChalkAgents — an Agent Management Platform for Scaled AI Product Development — and we're offering this rate while we stress-test and refine our platform with real enterprise clients.
It's a genuine win-win: you get enterprise AI deployment at a fraction of the cost, and we get real-world validation for our tooling.
Thank you for your time today. If you want to discuss how any of these tools apply to your organization, or if you're interested in working with FrontierAI, scan this QR code to connect with me on LinkedIn.
I'm always happy to chat about enterprise AI strategy. Let's stay in touch.