Day 1 — Technical Foundations, Strategy and Prompting
March 2, 2026
By the end of the day you will be able to:
| Time | Part | Session |
|---|---|---|
| 10:30 – 11:00 | Morning | Course Introduction |
| 11:00 – 11:30 | Morning | First Part |
| 11:30 – 11:45 | ☕ Break | |
| 11:45 – 13:00 | Morning | Second Part |
| 13:00 – 14:00 | 🍽 Lunch Break | |
| 14:00 – 14:55 | Afternoon | First Part |
| 14:55 – 15:05 | ☕ Break | |
| 15:05 – 15:50 | Afternoon | Second Part |
| 15:50 – 16:00 | Closing | Wrap-up |
Definitions · State of the Art · Common Myths · The Paradigm Shift
Adoption is still uneven: large OTAs are years ahead of SMEs
| Myth | Reality |
|---|---|
| "AI is always accurate" | It produces convincing errors (hallucinations) |
| "It understands everything like a human" | It predicts the next token — it doesn't reason |
| "It is autonomous and unpredictable" | It follows statistical distributions over data |
| "It will replace everyone" | It automates repetitive tasks, not entire roles |
| "It is too complex for SMEs" | APIs and no-code tools lower the barrier to entry |
How large language models work
A Large Language Model is a neural network trained to predict the next token in a sequence of text.
Before processing text, the LLM converts it into tokens (word fragments).
Interactive tool: platform.openai.com/tokenizer
| Parameter | Effect | Example use |
|---|---|---|
| `temperature` | 0 = deterministic, 1 = creative, >1 = chaotic | 0.2 for FAQs, 0.9 for copywriting |
| `top_p` | Truncates to the top P% of cumulative probability | 0.9 safe default |
| `max_tokens` | Limit of generated tokens | 150 for tweets, 2000 for articles |
| `stop` | String that stops generation | "\n\n" for short responses |
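The effect of temperature can be illustrated without any API: it simply rescales the model's token scores before the softmax. A minimal sketch with toy logits (not a real model):

```python
import math

def softmax(logits, temperature=1.0):
    # Divide each logit by the temperature before normalising:
    # low temperature sharpens the distribution, high temperature flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                 # hypothetical next-token scores
cold = softmax(logits, temperature=0.2)  # almost all mass on the top token
hot = softmax(logits, temperature=2.0)   # much flatter distribution
```

At temperature 0.2 the top token gets over 99% of the probability; at 2.0 it drops below 50%, which is why high values feel "creative" and eventually chaotic.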
The model's working memory: limits and practical implications
The context window is the maximum amount of text the model can "see" in a single processing moment.
| Model | Context window | ≈ English words |
|---|---|---|
| GPT-4o | 128,000 tokens | ~96,000 |
| Claude 3.5 Sonnet | 200,000 tokens | ~150,000 |
| Gemini 2 Pro | 2,000,000 tokens | ~1,500,000 |
| GPT-4o mini | 128,000 tokens | ~96,000 |
| Scenario | Estimated tokens | Strategy |
|---|---|---|
| Booking chatbot (short chat) | 1,000–5,000 | No problem, mini model |
| Analysis of 50 reviews | ~20,000 | Batch, one analysis at a time |
| Q&A on full catalogue (500 products) | >100,000 | RAG required |
| Single reply to a guest | 200–800 | Direct API call, inexpensive |
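A rough way to budget context before calling a model is the ~4 characters per token heuristic for English text; exact counts require the model's own tokenizer (for example OpenAI's tiktoken). A hedged sketch:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English text.
    # Use the model's actual tokenizer when precision matters.
    return max(1, len(text) // 4)

# A 2,000-character guest email is roughly 500 tokens of context.
email_tokens = estimate_tokens("x" * 2000)
```

Good enough to decide between "direct API call" and "batching or RAG", not for billing.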
Training a model on your own data
| Phase | Data | Who does it | Cost |
|---|---|---|---|
| Pre-training | Trillions of tokens from the internet | Labs only (OpenAI, Anthropic…) | $10M–$100M+ |
| Supervised FT | A few hundred to thousands of examples | Companies, developers | $100–$10,000 |
| RLHF / RLAIF | Human or AI preferences | Mainly the labs | High |
| Prompting | Examples in the text | Anyone | API call cost |
With prompting (context repeated on every call):

Prompt: "You are the Hotel Bellavista assistant.
Reply in a warm and professional manner.
Here are our policies: [500 tokens]
Here are our services: [800 tokens]
Guest question: late check-out?"

~1,400 tokens of context on every call

With a fine-tuned model:

Prompt: "Guest question: late check-out?"

Tone, policies and style already baked into the model
~30 tokens of context on every call
| Criterion | Use prompting | Consider fine-tuning |
|---|---|---|
| Time available | Available immediately | Weeks for data + training |
| Request volume | Low / medium | Very high (context cost) |
| Output style | Generic or semi-specific | Very rigid or branded |
| Available data | Few or none | Tens to hundreds of examples |
| Technical skills | None required | Developer or ML engineer |
Connecting an LLM to your data without training it
An embedding model converts text into a vector of numbers that represents its meaning.
Keyword search: searches for exact words

SELECT * FROM faq
WHERE text LIKE '%check-out%'

Does not find "departure time"

Semantic search: searches by meaning

query = embed("what time do I need to leave the room?")
results = db.search(query, top_k=3)

Finds check-out, departure, late check-out…
Common vector database solutions: Pinecone, Weaviate, Chroma, pgvector (PostgreSQL), Qdrant
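Under the hood, semantic search ranks chunks by vector similarity, most commonly cosine similarity. A self-contained sketch with toy two-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors:
    # 1.0 = same direction (same meaning), 0.0 = unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy illustration: "check-out" and "departure time" would get nearby
# vectors from a real embedding model, unrelated phrases would not.
same = cosine_similarity([1.0, 0.0], [1.0, 0.0])
orthogonal = cosine_similarity([1.0, 0.0], [0.0, 1.0])
```

A vector database does exactly this comparison, but over millions of stored vectors with an index that avoids the brute-force loop.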
| Aspect | RAG | Fine-tuning |
|---|---|---|
| Data updates | Easy (reload documents) | Expensive (retrain) |
| Transparency | High (source is visible) | Low (data baked in) |
| Factual hallucinations | Reduced (chunks as anchor) | Present |
| Output style | Does not change model behaviour | Modifies style and behaviour |
| Technical setup | Medium (vector DB, pipeline) | Medium-high (dataset, GPU) |
Autonomous systems that plan and act in the real world
- Receives a request, checks PMS availability, applies rate policies, generates the confirmation and sends an email to the guest
- Analyses history, local events and competitors → automatically proposes the weekly rate plan to the revenue manager
- Scans OTAs every hour, compares rates with your own, notifies the team if the gap exceeds a defined threshold
- Monitors TripAdvisor, Google, Booking in real time, drafts personalised replies and sends them after human approval
- Plans tailor-made itineraries, books restaurants and activities, answers pre- and post-stay questions in natural language
- Aggregates data from PMS, CRM and OTA channels every morning → generates a KPI report with highlighted anomalies, ready for the briefing
| Scenario | Reliability | Notes |
|---|---|---|
| Simple tasks with clear tools | High | 1-3 steps, well-defined tools |
| Information search and synthesis | High | No irreversible actions |
| Multi-step workflow (5-10 steps) | Medium | Requires supervision |
| Open-ended tasks, ambiguous goal | Low | Not recommended in production |
| Autonomous financial actions | Not recommended | Human supervision always |
How an LLM executes actions in the real world
An LLM on its own only produces text. To do something, it needs tools.
Each tool is defined with a schema that the model uses to generate correct calls.
{
  "name": "check_availability",
  "description": "Checks room availability in the PMS",
  "parameters": {
    "type": "object",
    "properties": {
      "room_type": {
        "type": "string",
        "enum": ["standard", "deluxe", "suite"],
        "description": "Requested room type"
      },
      "arrival_date": {
        "type": "string",
        "format": "date",
        "description": "Arrival date (YYYY-MM-DD)"
      },
      "nights": {
        "type": "integer",
        "description": "Number of nights"
      }
    },
    "required": ["room_type", "arrival_date", "nights"]
  }
}
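On the application side, a tool call generated by the model (a name plus JSON arguments) has to be routed to real code. A minimal dispatcher sketch; the `check_availability` stub here stands in for a real PMS lookup:

```python
import json

def check_availability(room_type: str, arrival_date: str, nights: int) -> dict:
    # Stub: a real implementation would query the PMS.
    return {"room_type": room_type, "available": True, "nights": nights}

# Registry mapping tool names (from the schema) to local functions.
TOOLS = {"check_availability": check_availability}

def dispatch(tool_call: dict) -> dict:
    # The model emits the name and a JSON string of arguments;
    # we parse the arguments and invoke the matching function.
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return fn(**args)

result = dispatch({
    "name": "check_availability",
    "arguments": '{"room_type": "deluxe", "arrival_date": "2026-07-01", "nights": 3}',
})
# The result is serialised back into the conversation for the model to read.
```

The loop "model proposes a call → application executes it → result goes back into the context" is the core of every agent framework.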
| Tool | Action | Integrated system |
|---|---|---|
| `check_availability` | Check available rooms | PMS (Opera, Mews, Cloudbeds) |
| `create_booking` | Book and confirm | PMS + Channel Manager |
| `get_weather` | Weather forecast for destination | Weather API |
| `send_notification` | Email/SMS to the guest | CRM, Mailchimp, Twilio |
| `update_rate` | Update prices | Revenue management system |
MCP is an open standard (Anthropic, 2024) for connecting LLMs to tools and data sources in an interoperable way.
# The model generates this code:
results = web_search("AI news")
content_list = []
for r in results:
    content = web_fetch(r.url)
    content_list.append(content)
write_blog(content_list)
1 round-trip · only the final output re-enters the LLM context
tools = [
    # 1. Add the execution environment
    {"type": "code_execution_20250522",
     "name": "code_execution"},
    # 2. Mark tools callable from code
    {
        "name": "query_database",
        "description": "...",
        "input_schema": {...},
        "allowed": ["code_execution"]
    }
]
The code_execution tool provides the sandbox; allowed exposes the function as callable in the environment.
- for loops over N elements, executed in parallel
- if/else on the result of a tool to decide the next tool
- web_search tool — filtering is applied automatically
- search_tool searches for relevant definitions on demand, via regex or BM25 query

Resuming in the afternoon
ChatGPT, Claude, Copilot and Gemini — access, costs and first hands-on activities
| Tool | Company | Key strength | Free? |
|---|---|---|---|
| ChatGPT | OpenAI | General purpose, the best-known; images (DALL-E 3) | Yes (limited) |
| Claude.ai | Anthropic | Long-form writing, PDF document analysis | Yes (limited) |
| Microsoft Copilot | Microsoft | Integrated in Outlook, Teams, Excel, Word | Yes (web) |
| Google Gemini | Google | Integrated in Gmail, Docs, Drive, Sheets | Yes (basic) |
The Projects feature (Pro plan) saves a fixed context — policy, tone, instructions — without having to rewrite it for every conversation.
If you already use Microsoft 365, check with your IT provider whether the M365 Copilot plan can be activated on your tenant.
Those already using Google Workspace get the most value from the paid plan, thanks to native integrations with Gmail and Drive.
| If… | Use | Plan |
|---|---|---|
| You already use Microsoft 365 (Outlook, Teams, Excel) | Microsoft Copilot M365 | $30/user/month |
| You already use Google Workspace (Gmail, Drive) | Google Gemini | €21.99/month |
| You want to analyse documents and write articulate texts | Claude.ai | Free to start, Pro ($18/month) for daily use |
| You want the most versatile option, with image generation | ChatGPT | Free to start, Plus ($20/month) for daily use |
Where AI creates value in your tourism business
The starting point for identifying AI opportunities is a map of repetitive business processes.
| Suitable | Not suitable (or not yet) |
|---|---|
| Responding to template emails with personalisation | Managing complex disputes with guests |
| Generating room / package descriptions | Legal decisions (refunds, damages) |
| Summarising feedback and reviews | Physical operational supervision |
| Proposing personalised itineraries | Relationships with high-value VIP clients |
| Translating multilingual communications | HR performance evaluations of staff |
| Task | Frequency | Impact | Estimated effort |
|---|---|---|---|
| 1. | | | |
| 2. | | | |
| 3. | | | |
Communicating with AI models precisely and systematically
Zero-shot: no examples, instruction only
"Classify this review as positive or negative: 'The breakfast was cold'"
Fast, cheap, sufficient for simple tasks

Few-shot: 2–5 examples before the task
"Positive: 'Breathtaking view' Negative: 'Small room' Classify: 'Cold breakfast'"
Improves precision and output format

Chain-of-thought: ask for explicit reasoning
"Reason step by step: 1) Identify the problem 2) Assess the tone 3) Give the classification"
Complex tasks, reasoning, calculations
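The few-shot pattern above is easy to assemble programmatically from a small set of labelled examples. A minimal sketch:

```python
# Labelled examples taken from the few-shot pattern above.
EXAMPLES = [
    ("Breathtaking view", "Positive"),
    ("Small room", "Negative"),
]

def few_shot_prompt(review: str) -> str:
    # Prepend each example as "Label: 'text'", then the item to classify.
    lines = [f"{label}: '{text}'" for text, label in EXAMPLES]
    lines.append(f"Classify: '{review}'")
    return "\n".join(lines)

prompt_text = few_shot_prompt("Cold breakfast")
```

Keeping the examples in data rather than hard-coded in the prompt makes them easy to update as you learn which ones steer the model best.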
A template is a prompt with variables to be filled in at runtime.
You are the guest relations manager of {{property_name}}.
You have received this review on {{platform}}:
---
{{review_text}}
---
Write a professional response that:
- Thanks the guest for their feedback
- Addresses the main point specifically
- {{specific_instruction}}
- Stays under {{max_words}} words
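Filling such a template at runtime takes only a few lines; this sketch assumes the {{variable}} syntax shown above:

```python
import re

def render(template: str, values: dict) -> str:
    # Replace every {{name}} placeholder with its value;
    # fail loudly if a variable was forgotten.
    def sub(match):
        key = match.group(1)
        if key not in values:
            raise KeyError(f"missing template variable: {key}")
        return str(values[key])
    return re.sub(r"\{\{(\w+)\}\}", sub, template)

prompt = render(
    "You are the guest relations manager of {{property_name}}.",
    {"property_name": "Hotel Bellavista"},
)
# → "You are the guest relations manager of Hotel Bellavista."
```

Raising on missing variables is deliberate: a silently half-filled prompt produces confusing model output that is hard to trace back.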
"You are the social media manager of Agriturismo Le Colline. Write 3 Instagram post variants to promote the wine-tasting weekend on 15 March. Format: - Emoji at the start - Max 150 characters - 3–5 relevant hashtags - Warm, authentic tone"
🍷 Saturday, 15 March, join us among our vineyards for an unforgettable wine tasting. #agriturismo #wine #tuscany

🌿 A weekend of flavour and nature: tasting with a view over the hills. Limited spots! #wine #tuscany #weekend

✨ Every sip tells the story of our land. Join us on 15 March. #winetasting #wine #italianwine
"Reply to this negative review. Tone: professional and understanding. Acknowledge the specific issues. Offer a direct point of contact. Max 100 words. Do not be defensive."
You are an expert in eastern Sicily tourism.
Create a 5-day itinerary for:
- Couple, aged 35–40
- Interests: Greco-Roman history, local cuisine, moderate walks
- Budget: mid-to-high
- Period: April
- Base: Syracuse
- Self-guided: independent travel by car
For each day indicate:
- Morning / afternoon / evening
- Estimated driving time
- 1 recommended restaurant with a typical speciality
- 1 practical note (booking recommended, opening hours, etc.)
"Given this cancellation policy: [policy text] Answer this guest question: 'If I cancel 10 days in advance do I get a full refund?' Be precise, brief, and friendly in tone."
"Write a commercial description for this package: - 3 nights sea-view suite - Breakfast included - Unlimited SPA access - Romantic dinner (1 evening) - Price: €480 per night Target: couples, special occasions. Max 80 words. Evocative tone."
| Error | Effect | Solution |
|---|---|---|
| Prompt too vague | Generic, unusable output | Specify format, length, tone |
| No business context | Generic, non-branded response | Add name, sector, values |
| Only saying what NOT to do | The model doesn't know what to do | Say WHAT you want, not just what to avoid |
| Multiple tasks in one prompt | Mediocre result on everything | One prompt = one task |
| No output structure | Unpredictable format | Specify: "reply as a list", "use JSON" |
Analyse the following hotel reviews and produce a structured report.
Reviews:
---
{{reviews_list}}
---
For each review, identify:
1. Overall sentiment: Positive / Neutral / Negative
2. Main themes mentioned (max 3 per review)
3. Implicit score 1–5
Then produce:
- Top 3 recurring strengths
- Top 3 areas for improvement
- 1 concrete recommended action
Output format: structured JSON
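Whenever you ask for structured JSON, validate the model's reply before using it downstream. The field names below are assumptions matching the prompt above, not a fixed schema:

```python
import json

# Hypothetical field names for the review-analysis report; adapt them
# to whatever structure you specify in your own prompt.
REQUIRED_FIELDS = ("reviews", "strengths", "improvements", "recommended_action")

def parse_report(raw: str) -> dict:
    # json.loads raises on malformed output; the field check catches
    # replies where the model silently dropped part of the structure.
    report = json.loads(raw)
    for key in REQUIRED_FIELDS:
        if key not in report:
            raise ValueError(f"missing field: {key}")
    return report

sample = '{"reviews": [], "strengths": [], "improvements": [], "recommended_action": "x"}'
report = parse_report(sample)
```

On validation failure, a common pattern is to retry once with the error message appended to the prompt.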
Prompt engineering: optimising the text of a single prompt (role, instruction, format, constraints, few-shot examples).
Context engineering: designing the entire context passed to the model.
GDPR, practical risks and responsible AI use
| Risk | Description | Impact |
|---|---|---|
| Prompt injection | A malicious input alters the model's behaviour | Exposed data, unauthorised actions |
| Data leakage | The model "remembers" sensitive data inserted in the context | Exposure of client/business data |
| Model inversion | Attacks that extract data from model parameters | Relevant only for fine-tuning on sensitive data |
| Hallucinations in factual data | The model invents prices, policies, legal information | Reputational damage, disputes |
For batch processing of client data: always use pseudonymisation and keep the mapping locally.
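A minimal pseudonymisation sketch: e-mail addresses are swapped for placeholders before the text leaves your systems, and the mapping stays local. Real deployments should also cover names, phone numbers and booking references:

```python
import re

def pseudonymise(text: str):
    # Replace each e-mail address with a numbered placeholder and keep
    # the placeholder → original mapping locally for re-personalisation.
    mapping = {}
    def sub(match):
        placeholder = f"[EMAIL_{len(mapping) + 1}]"
        mapping[placeholder] = match.group(0)
        return placeholder
    clean = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", sub, text)
    return clean, mapping

clean, mapping = pseudonymise("Contact mario.rossi@example.com for the refund.")
# clean → "Contact [EMAIL_1] for the refund."
# mapping keeps the original address, and never leaves your infrastructure.
```

The external API only ever sees the placeholders; replies are re-personalised locally by reversing the mapping.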
Recap · Resources · Next steps
Thank you for today.
See you on Day 2.
Abbreviations and terms used during the day

| Term | Definition |
|---|---|
| OTA | Online Travel Agency — online booking platforms (Booking.com, Expedia, Airbnb) |
| PMS | Property Management System — hotel management software for bookings, rooms and billing |
| CRM | Customer Relationship Management — system for managing client data and relationships |
| KPI | Key Performance Indicator — key metric for measuring business performance |
| FAQ | Frequently Asked Questions — common questions with pre-written answers |
| DMC | Destination Management Company — local operator organising on-site tourism services |
| B2B / B2C | Business-to-Business / Business-to-Consumer — type of commercial relationship |
| LLM | Large Language Model — AI model trained on large amounts of text |
| ML | Machine Learning — AI discipline that learns from data without explicit rules |
| RAG | Retrieval Augmented Generation — technique that connects an LLM to external documents |
| API | Application Programming Interface — interface for communication between software systems |
| MCP | Model Context Protocol — Anthropic standard for connecting AI agents to external tools |
| GPU | Graphics Processing Unit — specialised processor used for training AI models |
| GDPR | General Data Protection Regulation — EU regulation on personal data protection |
| DPA | Data Processing Agreement — contractual agreement on data processing with third-party providers |
| SOC 2 | Service Organization Control 2 — security certification for cloud service providers |