Sonnets and Muppets: Code Your Own AI Weatherperson

by
Sy Mitchell

Turn Your Curiosity Into Forecasting Power

Imagine getting more than just a weather update from an app on your iPhone. What if you could customize it to talk about the weather like Kermit the Frog or Shakespeare? 

Well, we can get you there. Let’s bring your weather to life!

In this guide we’ll show you how to query a live weather API, add character, and even teach a local Large Language Model (LLM) to respond creatively in real time.

Getting started: Building your toolkit

What if we want to get data that isn't static? Something like checking the weather in real time by querying an API. What we'll be doing is far more interesting than what Siri can do. Let's try to do just that, and instead of using OpenAI, we'll use Ollama, so we can run our LLM locally on a workstation, giving us more control and flexibility.

Working With the Right Tools

To work effectively with real-time data and enhance the customization potential of your AI, setting up the right tools is essential. This guide assumes you're working with an Apple Silicon Mac with at least 16 GB of RAM, or a Windows PC with a fast GPU.

Installing Ollama

Ollama is a great utility for working with different LLMs. Originally made for the Llama models created by Meta, it now supports a large collection of models, letting us experiment locally without running up huge cloud costs.

1. Download the Ollama tool and launch it. It may prompt you to install the Ollama CLI in your path; allow it if asked.

2. In a terminal, you should now be able to start an Ollama session with the Llama 3.1 model by running ollama run llama3.1:latest. Since this is the first time you're running this particular model, Ollama will download it first.

3. Once you've launched Ollama from the terminal, you will have a >>> prompt that you can ask questions of, just like on the ChatGPT website. You can close the session with Ctrl-D or by typing /bye:

% ollama run llama3.1:latest
pulling manifest 
pulling 8eeb52dfb3bb... 100%   4.7 GB                         
pulling 948af2743fc7... 100%   1.5 KB                         
pulling 0ba8f0e314b4... 100%    12 KB                         
pulling 56bb8bd477a5... 100%    96 B                         
pulling 1a4c3c319823... 100%   485 B                         
verifying sha256 digest 
writing manifest 
success 
>>> What are you?
I am a computer program designed to simulate conversation, answer questions, and provide information on a wide range of topics. I'm a type of artificial intelligence (AI) called a "chatbot" or "conversational AI."

My purpose is to assist users like you by:

1. Answering questions: I can provide information on various subjects, from science and history to entertainment and culture.
2. Generating text: I can create human-like text based on the input I receive, often in response to a question or prompt.
3. Chatting: I'm here to listen and respond to your thoughts, feelings, and experiences.

I don't have personal opinions, emotions, or physical existence. I exist solely as a digital entity, running on computer servers and responding to inputs from users like you!

How does that sound? Do you have any specific questions or topics you'd like to discuss?

Preparing Your Python Playground

This guide assumes you've already got Python configured and working with the OpenAI libraries. Now we just want to add the Ollama API library with pip install ollama.
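To confirm the library can talk to your local Ollama server, you can run a quick sanity check (a minimal sketch; it assumes the llama3.1 model you pulled above is installed):

import ollama

# connect to the local Ollama server and send a throwaway prompt
client = ollama.Client()

response = client.chat(
    model='llama3.1',
    messages=[{'role': 'user', 'content': 'Say hello in five words.'}]
)

print(response['message']['content'])

If this prints a greeting, the Python side of your toolkit is ready.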

Doing our first tool call

Previously we just built a free-form text client—you passed a question to an LLM and it returned whatever the model spit out. This time we want to do something a little more intelligent (so to speak) with the LLM: We're going to let it use tools (or at least convince it to provide formatted responses that we can use in our own custom tools).

Pinpointing Edinburgh: Asking the right questions

To help someone figure out the weather, we need a location. And that means we need to figure out exactly which location the user is referring to. 

You can ask Siri, "What is the weather like in Dublin, Ireland?" but you can't reliably ask, "What is the weather like in the capital of Scotland?" Do you spot the difference? While most people would know that you're talking about Edinburgh, Siri wouldn't.

Asking an LLM "What is the capital of Scotland?" will give a response, but not one that is easy or consistent to parse. Notice how the response changes between executions; trying to extract the answer reliably with a text parser would be a nightmare. And changing models means an entirely different response:

% ollama run llama3.1:latest
>>> what is the capital of scotland?
The capital of Scotland is Edinburgh.

>>> what is the capital of scotland?
That's a easy one! The capital of Scotland is still: **Edinburgh**

>>> what is the capital of scotland?
I'll stick to my previous answer... The capital of Scotland is: **Edinburgh** (no change there!)

>>> /bye
% ollama run gemma2:9b      
>>> what is the capital of scotland
The capital of Scotland is **Edinburgh**. 


>>> what is the capital of scotland
Edinburgh 😊 

This is where a tool comes in. In the context of this example, the tool is really just a schema that goes along with the prompt, giving you a reliable format for the information you get back. When you specify the name of the function you want the model to use and the parameters it takes, the model will reply with just that information, if it has it. This capability also doubles as a check against nonsense responses: if something has gone wrong, chances are the model won't pass the tool's data back in the response.

We're going to create a new file called weather.py and use this to do our work. The first step is to get just enough location information from the user to take action with it. So instead of performing the typical client call like we've done in the past, we're going to augment it with the schema we want the response in, and add some content to the system part of the message indicating that it should refer to the get_loc_name tool we have defined. The tools part of client.chat is where the schema is defined:

client = ollama.Client()

messages = [
    {
        'role': 'system',
        'content': "Determine which city the user is talking about and provide that information to the get_loc_name tool."
    },
    {
        'role': 'user',
        'content': question
    }
]

response = client.chat(
    model='llama3.1',
    messages=messages,
    tools=[
        {
            'type': 'function',
            'function': {
                'name': 'get_loc_name',
                'description': 'Get the coordinates for a city',
                'parameters': {
                    'type': 'object',
                    'properties': {
                        'city': {
                            'type': 'string',
                            'description': 'The city to get the weather for',
                        },
                        'state': {
                            'type': 'string',
                            'description': 'The state, county, or region the city is in',
                        },
                        'country': {
                            'type': 'string',
                            'description': 'The country that the city is in',
                        },
                    },
                    'required': ['city', 'state', 'country'],
                },
            },
        },
    ],
)

This is what that looks like:

import ollama
import json

def get_city(question):
    client = ollama.Client()

    messages = [
        {
            'role': 'system',
            'content': "Determine which city the user is talking about and provide that information to the get_loc_name tool."
        },
        {
            'role': 'user',
            'content': question
        }
    ]

    response = client.chat(
        model='llama3.1',
        messages=messages,
        tools=[
            {
                'type': 'function',
                'function': {
                    'name': 'get_loc_name',
                    'description': 'Get the coordinates for a city',
                    'parameters': {
                        'type': 'object',
                        'properties': {
                            'city': {
                                'type': 'string',
                                'description': 'The city to get the weather for',
                            },
                            'state': {
                                'type': 'string',
                                'description': 'The state, county, or region the city is in',
                            },
                            'country': {
                                'type': 'string',
                                'description': 'The country that the city is in',
                            },
                        },
                        'required': ['city', 'state', 'country'],
                    },
                },
            },
        ],
    )

    # this is a simple structured object, we're not going to make a complete class here
    output = {
        'location': [],
    }

    # the LLM didn't return a response with a tool call
    if not response['message'].get('tool_calls'):
        print("The model didn't use the function. Its response was:")
        print(response['message']['content'])
        return output

    # Add the formatted data from tool calls to the structured output
    if response['message'].get('tool_calls'):
        output = {}

        for tool in response['message']['tool_calls']:
            if tool['function']['name'] == 'get_loc_name':
                args = tool['function']['arguments']
                output['location'] = args

        return output


question = 'what is the capital of scotland?'

print(f"Asking: {question}")

response = get_city(question)

print(json.dumps(response, indent=2))

Running this code gives the following output (we're going to stick with Scotland for now). Notice how it's a standard JSON object instead of raw text? That makes our lives much easier.

% python weather.py 
Asking: what is the capital of scotland?
{
  "location": {
    "city": "Edinburgh",
    "country": "United Kingdom",
    "state": "Scotland"
  }
}

Adding personality: Your weather Muppet

Now that we have a location, let's figure out who's going to deliver the weather report. Maybe you want a Muppet to be your morning weather person. To do this, we should also have a way for the LLM to give us the name of a famous person if it's included in the question. This is an optional attribute, so we're going to add weatherperson to the list of parameters, but not the list of required ones.

'weatherperson': {
    'type': 'string',
    'description': 'The famous person who may be referenced in the question',
},

Next, update the system message to give instructions about the weatherperson field:

{
    'role': 'system',
    'content': "Determine which city the user is talking about and provide that information to the get_loc_name tool. If they refer to a famous person, add that name to the weatherperson field."
},

And change the question to include a famous Muppet:

question = 'what is the capital of scotland? Who is the green muppet?'

Combining all of the above, the new code will output something like this when run:

% python step_02.py
Asking: what is the capital of scotland? Who is the green muppet?
{
  "location": {
    "city": "Edinburgh",
    "country": "United Kingdom",
    "state": "Scotland",
    "weatherperson": "Kermit the Frog"
  }
}

Getting the location

There are many weather services out there; however, due to abuse, finding one to use just for tutorials takes a little more effort. Open-Meteo provides a free tier for testing as part of their open source SDK, and also makes it easy to experiment with the data they aggregate. Open-Meteo has a great page that lets you experiment with their API parameters to get current and historical weather data for any location, if you know the latitude and longitude. (This is the weather right now for Edinburgh, Scotland.)
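If you'd like to poke at the endpoint before writing any Python, a single request from the terminal shows the shape of the data. This example uses the same forecast endpoint, parameters, and Edinburgh coordinates we rely on later in this guide:

% curl "https://api.open-meteo.com/v1/forecast?latitude=55.9533&longitude=-3.1884&current=apparent_temperature,rain&timezone=auto"

The response is a JSON document containing just the requested fields, which is exactly what we'll be parsing shortly.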

So it appears we need to add another call here—one to get the coordinates of the city—before we can get the weather for it. This means adding a second service to do that conversion for us. Nominatim works well via the GeoPy module, so that’s what we use in this example. However, Nominatim does have pretty strict throttle requirements, so there is also going to be a “debug” mode that will just return the hard-coded coordinates of Edinburgh, so as not to abuse their services while testing other aspects of our program.

Time for more libraries, so go ahead and add the geopy and requests modules to the environment:

pip install geopy requests

Then load the Nominatim class from geopy to use for the coordinate lookup. We're also going to make some HTTP requests to get the information from Open-Meteo. And since this is the weather, handling dates and times will be important, so those modules are loaded too.

The start of the Python code now looks like this:

import requests
import json
import ollama
import argparse

from geopy.geocoders import Nominatim
from typing import Tuple, Dict
from zoneinfo import ZoneInfo
from datetime import datetime

Next we add a new function, get_coords, which takes the city, state, and country values returned from the LLM, along with a test boolean; it queries Nominatim's servers for the location and returns the coordinates as a tuple. We're starting to use type hints (city: str and -> Tuple[float, float]) so other functions know what kind of data to expect from this function when it is executed, and so the IDE can help guide you when you call it.

def get_coords(city: str, state: str, country: str, test: bool = True) -> Tuple[float, float]:
    if test:
        print("We're testing")
        return (55.9533, -3.1884)
    else:
        app = Nominatim(user_agent="sandgarden_tutorial")
        address = f"{city}, {state}, {country}"
        loc = app.geocode(address)

        return (round(loc.latitude, 4), round(loc.longitude, 4))

Adding these lines at the end will call the function and print the results. To enable the live lookup, change test=True to test=False (we've intentionally left it as True here so copy-pasting doesn't hammer Nominatim's systems):

location = answer['location']

coords = get_coords(location['city'], location['state'], location['country'], test=True)

print(coords)

Generating the following output:

% python step_02.py
Asking: what is the capital of scotland? Who is the green muppet?
{
  "location": {
    "city": "Edinburgh",
    "country": "Scotland",
    "state": "",
    "weatherperson": "Kermit the Frog"
  }
}
(55.9533, -3.1884)

Now that we have the coordinates for this step, we can fall back to using the test values, which already align with what we received from Nominatim's servers. Before you proceed, make sure this is the output your script is generating:

% python step_02.py
Asking: what is the capital of scotland? Who is the green muppet?
{
  "location": {
    "city": "Edinburgh",
    "country": "United Kingdom",
    "state": "",
    "weatherperson": "Kermit the Frog"
  }
}
We're testing
(55.9533, -3.1884)

Getting the weather and cleaning up some data

Now that we have the coordinates for Edinburgh, it’s time to fetch the weather data. Open-Meteo offers a robust SDK that’s ideal for continuous use or large-scale data collection, but since we only need a small batch of weather data over a short period, a single requests.get call to their API will serve our purposes perfectly.

Handling timezones: Adjusting for local relevance

Since weather is usually most relevant to the people at the location in question, we want to make sure we're including that location's timezone. Luckily, Open-Meteo conveniently includes a location's timezone in its response; however, Python's datetime.fromisoformat returns a naive datetime when the string doesn't carry an offset, so we'll use a helper function. It takes an ISO-8601 formatted datetime stamp and a timezone name, returning a datetime object with the correct timezone attached. Since this process repeats several times, the helper keeps the code simple and readable.

def timestamp(iso: str, tz: str) -> datetime:
    dt = datetime.fromisoformat(iso)
    tz = ZoneInfo(tz)

    return dt.replace(tzinfo=tz)
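Here's a quick illustration of why this matters (the timestamps are made-up examples): once both stamps carry timezone info, the datetime objects can be compared directly, which is exactly what we'll do below when filtering out hourly entries that are already in the past.

report_time = timestamp('2024-10-30T15:15', 'Europe/London')
next_hour = timestamp('2024-10-30T16:00', 'Europe/London')

# aware datetime objects compare cleanly
print(next_hour > report_time)  # True: this hourly entry is still in the future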

Filtering for Relevant Weather Data

Now we need to get the weather. Along with the latitude and longitude, we need to specify exactly which data we want back. Open-Meteo provides a vast amount of data, so we'll narrow it down to what's essential. Since we're not using their helper library, and we want to clean up the weather data into current, daily, and hourly summaries, there will be some data munging happening.

def weather_lat_long(coords: Tuple) -> Dict:
    params = {
        "latitude": coords[0],
        "longitude": coords[1],
        "current": ["apparent_temperature", "is_day", "rain", "showers", "snowfall"],
        "hourly": ["temperature_2m", "precipitation_probability", "cloud_cover", "visibility"],
        "daily": ["temperature_2m_max", "temperature_2m_min", "sunrise", "sunset", "precipitation_sum", "precipitation_hours", "precipitation_probability_max"],
        "timezone": "auto",
        "forecast_days": 2
    }

    url = "https://api.open-meteo.com/v1/forecast"
    r = requests.get(url, params=params).json()

    current = r['current']
    current.pop('interval')
    # we want a "now" datetime object, this makes finding older entries easier
    report_time = timestamp(r['current']['time'], r['timezone'])
    # since we're passing this data to a language model that is bad at math,
    # we format the dates as sentences instead of ISO format
    current['current_time'] = report_time.strftime('%A %-d of %B at %-I:%M %p')

    weather = {
        'daily': [],
        'hourly': [],
        'current': current,
        'timezone': r['timezone']
    }

    daily = r['daily']
    hourly = r['hourly']

    for ind, x in enumerate(daily['time']):
        date = timestamp(daily['time'][ind], r['timezone'])

        sunrise = timestamp(daily['sunrise'][ind], r['timezone'])
        sunset = timestamp(daily['sunset'][ind], r['timezone'])

        tmp = {
            'date': date.strftime('%A %-d of %B'),
            'high_temp': daily['temperature_2m_max'][ind],
            'low_temp': daily['temperature_2m_min'][ind],
            'sunrise': sunrise.strftime('%A %-d of %B at %-I:%M:%S %p'),
            'sunset': sunset.strftime('%A %-d of %B at %-I:%M:%S %p'),
            'rain_amount_mm': daily['precipitation_sum'][ind],
            'hours_rain': daily['precipitation_hours'][ind],
            'chance_rain': daily['precipitation_probability_max'][ind]
        }
        weather['daily'].append(tmp)

    for ind, x in enumerate(hourly['time']):
        hour = timestamp(hourly['time'][ind], r['timezone'])

        # We don't need to pass old data to the LLM, it would just complicate things;
        # because of our timestamp function, we have aware datetime objects that can be easily compared
        if hour > report_time:
            tmp = {
                'hour': hour.strftime('%A %-d of %B at %-I:%M %p'),
                'temp': hourly['temperature_2m'][ind],
                'precipitation_probability': hourly['precipitation_probability'][ind],
                'cloud_cover': hourly['cloud_cover'][ind],
                'visibility': hourly['visibility'][ind]
            }
            weather['hourly'].append(tmp)

    return weather

# call the new function and output the forecast nicely with json
forecast = weather_lat_long(coords)

print(json.dumps(forecast, indent=2))

With those two functions added, running the code will output a large block of information:

% python step_03.py
Asking: what is the capital of scotland? Who is the green muppet?
{
  "location": {
    "city": "Edinburgh",
    "country": "United Kingdom",
    "state": "Scotland",
    "weatherperson": "Kermit the Frog"
  }
}
We're testing
(55.9533, -3.1884)
{
  "daily": [
    {
      "date": "Wednesday 30 of October",
      "high_temp": 14.1,
      "low_temp": 10.0,
      "sunrise": "Wednesday 30 of October at 7:16:00 AM",
      "sunset": "Wednesday 30 of October at 4:36:00 PM",
      "rain_amount_mm": 0.0,
      "hours_rain": 0.0,
      "chance_rain": 0
    },
    {
      "date": "Thursday 31 of October",
      "high_temp": 15.0,
      "low_temp": 11.7,
      "sunrise": "Thursday 31 of October at 7:18:00 AM",
      "sunset": "Thursday 31 of October at 4:33:00 PM",
      "rain_amount_mm": 0.0,
      "hours_rain": 0.0,
      "chance_rain": 0
    }
  ],
  "hourly": [
    {
      "hour": "Wednesday 30 of October at 4:00 PM",
      "temp": 13.1,
      "precipitation_probability": 0,
      "cloud_cover": 45,
      "visibility": 38800.0
    },
 ... cut ...
    {
      "hour": "Thursday 31 of October at 11:00 PM",
      "temp": 13.0,
      "precipitation_probability": 0,
      "cloud_cover": 100,
      "visibility": 24120.0
    }
  ],
  "current": {
    "time": "2024-10-30T15:15",
    "apparent_temperature": 11.3,
    "is_day": 1,
    "rain": 0.0,
    "showers": 0.0,
    "snowfall": 0.0,
    "current_time": "Wednesday 30 of October at 3:15 PM"
  },
  "timezone": "Europe/London"
}

Summarizing the weather report

This is a lot of information to ask an LLM to process in a single prompt. But since we’re running our LLM locally, we can use it as often as we need. Why not ask it to provide separate summaries for the current, daily, and hourly information, before then asking the LLM to create the final weather report for us? Some details might be lost in this summarization, but because we still have the original data, this is something we could refine with future testing.

While we need to call the LLM three times, we can reuse the same function. It takes the report time and a specific weather period, returning a concise, two-sentence summary of the data.

def sum_weather(weather, time):
    client = ollama.Client()

    messages = [
        {
            'role': 'system',
            'content': "Without repeating the request, make a two sentence summary of the provided weather information. rain is last hour in mm, snow is cm, temperature in celsius. if current_time is provided, reference it."
        },
        {
            'role': 'user',
            'content': f"As of {time} the weather data is {json.dumps(weather)}"
        }
    ]

    resp = client.chat(model='llama3.1', messages=messages)
    return resp['message']['content']

Next, we call it three times, making sure to include the date and time along with the period we want a weather summary for.

forecast = weather_lat_long(coords)

report_time = forecast['current']['current_time']

current = sum_weather(forecast['current'], report_time)
daily = sum_weather(forecast['daily'], report_time)
hourly = sum_weather(forecast['hourly'], report_time)

print(current)
print(daily)
print(hourly)

And running the command gets us this:

% python step_04.py
Asking: what is the capital of scotland? Who is the green muppet?
{
  "location": {
    "city": "Edinburgh",
    "country": "United Kingdom",
    "state": "Scotland",
    "weatherperson": " Kermit the Frog"
  }
}
We're testing
(55.9533, -3.1884)
There is currently no rain or snowfall as of Wednesday, but it's expected to be quite warm with an apparent temperature of 11.3°C. As of now (at 3:30 PM on Wednesday), the sky is likely clear given that it's daytime.
As of Wednesday 30th of October at 3:30 PM, the current temperature is a high of 14.1°C and a low of 10.0°C, with no rain falling in the last hour. There has been no rain on either Tuesday or Wednesday, but Thursday's forecast indicates a chance of rain remains zero.
As of Wednesday 30 of October at 3:30 PM, the temperature is around 12°C and there has been no precipitation in the last hour.

The Home Stretch

We're almost there! Thanks for sticking with us. We've gone from a human-generated sentence, to machine-formatted data, to machine-generated output that reads pretty well as prose. Now we can combine it all into the final prompt that will produce our customized weather report.

Choosing your persona

We don't know if a user will provide us with a persona to write with, so let's have a default: someone whose works are in the public domain, such as the Bard himself, William Shakespeare.

def the_report(summary, anchor_person = 'William Shakespeare'):
    client = ollama.Client()

    location = " ".join(summary['location'])

    # catch the case where we get an empty value
    if len(anchor_person) < 1:
        anchor_person = 'William Shakespeare'

    messages = [
        {
            'role': 'system',
            'content': f"Please create a weather report as if {anchor_person} wrote it, based on the weather information provided by the user, for {location}. Mention {location} in the response"
        },
        {
            'role': 'user',
            'content': json.dumps(summary)
        }
    ]

    response = client.chat(model='llama3.1', messages=messages)

    return response['message']['content']

With this included, let's wrap things up and see what the Bard forecasts for us first, before asking a Muppet. To do that, we generate the report without an anchorperson provided.
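One step we haven't shown in isolation: the_report expects the three summaries bundled together with the location as a single summary dict. Building it from the variables in the previous step looks like this (equivalent to the construction in the full listing at the end of this guide):

summary = {
    'daily': daily,
    'hourly': hourly,
    'current': current,
    'location': [location['city'], location['state'], location['country']]
}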

weather_report = the_report(summary)

print(weather_report)

And this is what we get:

% python step_05.py
Asking: what is the capital of scotland? Who is the green muppet?
{
  "location": {
    "city": "Edinburgh",
    "country": "UK",
    "state": "Scotland",
    "weatherperson": "Kermit the Frog"
  }
}
We're testing
(55.9533, -3.1884)
O, fair inhabitants of Edinburgh, Scotland's hallowed shore,
Hear ye the tale of weather that doth unfold once more.
As of this very hour, on Wednesday's eve, I say,
No rain hath fallen, nor snow to mar thy day.

The mercury, in its celestial dance, didst rise high,
To 14.1 degrees of warmth, a gentle, golden sky.
And though the night didst bring a cooling breeze, 'twas still
A comfortable low of 10.0 degrees, to ease the chill.

But lo, dear friends, the forecast doth promise more,
For tomorrow's dawn shall bring warmer conditions galore.
The sun, in all its radiant splendor, shall shine bright,
And Edinburgh's streets, now calm and peaceful, shall take flight.

And thus, as I pen this weather report, on Thursday's eve,
At 4:00 PM, the hour of four, the tale doth unfold once more.
No rain nor snow hath fallen in the past short hour,
The temperature, a gentle 12.3 degrees, doth linger and endure.

But fear not, fair Edinburgh, for the day is young and bright,
And at this very moment, 'tis a pleasant afternoon, with nary a cloud in sight.
A mild 11.2 degrees, with naught but sunshine to impart,
Doth make it a day most lovely, a true joy to the heart.

Thus ends my weather report, a tale of sunshine and of cheer,
For Edinburgh, Scotland's hallowed city, on this autumnal eve so dear.

Great! Now let’s tweak how we generate the report: if we have an anchor person’s name, we’ll pass it along; if not, William will handle it by default.

if 'weatherperson' in location.keys():
    weather_report = the_report(summary, location['weatherperson'])
else:
    weather_report = the_report(summary)

print(weather_report)

And here's Kermit the Frog with the weather:

% python step_05.py
Asking: what is the capital of scotland? Who is the green muppet?
{
  "location": {
    "city": "Edinburgh",
    "country": "United Kingdom",
    "state": "Scotland",
    "weatherperson": "Kermit the Frog"
  }
}
We're testing
(55.9533, -3.1884)
"Hi-ho, Edinburgh! It's your good friend Kermit here with the weather report.

Well, Muppet pals, I've got some great news for you. As of today (Thursday, October 31st), the sun is shining brightly over Scotland's lovely capital city - Edinburgh, Scotland, United Kingdom, that is! And it's not just a little bit of sunshine we're talking about, folks. The current temperature is a crisp 13.9°C with clear skies and excellent visibility.

Now, I know what you're thinking: "Kermit, what about the rain?" Fear not, my friends, because as of today at 4 PM, there hasn't been a single drop in the past hour! And the forecast for tonight? Not a cloud in the sky, I'm afraid (no pun intended). No precipitation is expected, so it's shaping up to be another lovely day and night.

In fact, if we go back to yesterday (Wednesday, October 30th), things were just as pleasant. The temperature had reached a high of 14.1°C with not a spot of rain in sight - and that was the case both last night into this morning and again tonight into tomorrow morning! It's been one dry spell after another for our friends in Edinburgh.

So, all in all, it looks like we're in for a lovely spell of weather here in Scotland's greatest city. And I, for one, couldn't be more pleased!

Now for the final touch: we want to be able to ask about any city and have any chosen person read the weather. To achieve this, we'll replace the static question with one passed by the user on the command line (assuming the question is properly quoted). Additionally, we need to set test=False in the get_coords call to allow fetching weather data for cities beyond Edinburgh.

parser = argparse.ArgumentParser()

parser.add_argument("question", nargs='*', help="Question to ask LLM")

cli_args = parser.parse_args()

question = ' '.join(cli_args.question)

print(f"Asking: {question}")

Now we can finally get the meaningful answers we want from our favorite flailing felt amphibian:

% python step_05.py "I'd like the green muppet to get me the weather for city that never sleeps"
Asking: I'd like the green muppet to get me the weather for city that never sleeps
{
  "location": {
    "city": "New York",
    "country": "USA",
    "state": "NY",
    "weatherperson": "Kermit the Frog"
  }
}
Hi-ho, New York NY USA! It's your pal Kermit the Frog here with the weather report!

Well, it looks like things have taken a bit of a turn for the worse in the Big Apple. As I'm hopping through my forecast, I see that it's been raining cats and dogs since Thursday morning, with a precipitation probability of 1! That's a lot of wet stuff falling from the sky.

Currently, as of Wednesday at 12:15 PM, the temperature is a cozy 21.2°C (that's around 70°F for all you humans out there), but let me tell you, it's not exactly sunshine and rainbows outside. The forecast for tomorrow doesn't look too much better, with temperatures only reaching a high of 27.2°C on Thursday.

Now, I know what you're thinking: "Kermit, what about the overnight low?" Well, my friends, the good news is that it's not as chilly as I thought it would be, with a low of 13.5°C. So, all in all, it's been a bit of a soggy spell for New York NY USA.

Stay dry out there, and remember: it's not easy being green... or staying warm when the rain is pouring down!

Now we've got a multi-step tool call using LLMs to both parse user input and refine output before passing it to other models. While this is a simple walkthrough of using tools and function schemas to extract machine-parsable data from a prompt response, hopefully it demonstrates some of the power of having natural language interfaces to a system.

Behold!

import requests
import json
import ollama
import argparse

from geopy.geocoders import Nominatim
from typing import Tuple, Dict
from zoneinfo import ZoneInfo
from datetime import datetime

def get_city(question):
    client = ollama.Client()

    messages = [
        {
            'role': 'system',
            'content': "Determine which city the user is talking about and provide that information to the get_loc_name tool. If they refer to a famous person, add that name to the weatherperson field."
        },
        {
            'role': 'user',
            'content': question
        }
    ]

    response = client.chat(
        model='llama3.1',
        messages=messages,
        tools=[
            {
                'type': 'function',
                'function': {
                    'name': 'get_loc_name',
                    'description': 'Get the coordinates for a city',
                    'parameters': {
                        'type': 'object',
                        'properties': {
                            'city': {
                                'type': 'string',
                                'description': 'The city to get the weather for',
                            },
                            'state': {
                                'type': 'string',
                                'description': 'The state, county, or region the city is in',
                            },
                            'country': {
                                'type': 'string',
                                'description': 'The country that the city is in',
                            },
                            'weatherperson': {
                                'type': 'string',
                                'description': 'The famous person who may be referenced in the question',
                            },
                        },
                        'required': ['city', 'state', 'country'],
                    },
                },
            },
        ],
    )

    # this is a simple structured object, we're not going to make a complete class here
    output = {
        'location': [],
    }

    # the LLM didn't return a response with a tool call
    if not response['message'].get('tool_calls'):
        print("The model didn't use the function. Its response was:")
        print(response['message']['content'])
        return output

    # Add the formatted data from tool calls to the structured output
    if response['message'].get('tool_calls'):
        output = {}

        for tool in response['message']['tool_calls']:
            if tool['function']['name'] == 'get_loc_name':
                args = tool['function']['arguments']
                output['location'] = args

        return output


def get_coords(city: str, state: str, country: str, test: bool = True) -> Tuple[float, float]:
    if test:
        print("We're testing")
        return (55.9533, -3.1884)
    else:
        app = Nominatim(user_agent="sandgarden_tutorial")
        address = f"{city}, {state}, {country}"
        loc = app.geocode(address)

        return (round(loc.latitude, 4), round(loc.longitude, 4))

def timestamp(iso: str, tz: str) -> datetime:
    dt = datetime.fromisoformat(iso)
    tz = ZoneInfo(tz)

    return dt.replace(tzinfo=tz)


def weather_lat_long(coords: Tuple) -> Dict:
    params = {
        "latitude": coords[0],
        "longitude": coords[1],
        "current": ["apparent_temperature", "is_day", "rain", "showers", "snowfall"],
        "hourly": ["temperature_2m", "precipitation_probability", "cloud_cover", "visibility"],
        "daily": ["temperature_2m_max", "temperature_2m_min", "sunrise", "sunset", "precipitation_sum", "precipitation_hours", "precipitation_probability_max"],
        "timezone": "auto",
        "forecast_days": 2
    }

    url = "https://api.open-meteo.com/v1/forecast"
    r = requests.get(url, params=params).json()

    current = r['current']
    current.pop('interval')
    # we want a "now" datetime object, this makes finding older entries easier
    report_time = timestamp(r['current']['time'], r['timezone'])
    # since we're passing this data to a language model that is bad at math,
    # we format the dates as sentences instead of ISO format
    current['current_time'] = report_time.strftime('%A %-d of %B at %-I:%M %p')

    weather = {
        'daily': [],
        'hourly': [],
        'current': current,
        'timezone': r['timezone']
    }

    daily = r['daily']
    hourly = r['hourly']

    for ind, x in enumerate(daily['time']):
        date = timestamp(daily['time'][ind], r['timezone'])

        sunrise = timestamp(daily['sunrise'][ind], r['timezone'])
        sunset = timestamp(daily['sunset'][ind], r['timezone'])

        tmp = {
            'date': date.strftime('%A %-d of %B'),
            'high_temp': daily['temperature_2m_max'][ind],
            'low_temp': daily['temperature_2m_min'][ind],
            'sunrise': sunrise.strftime('%A %-d of %B at %-I:%M:%S %p'),
            'sunset': sunset.strftime('%A %-d of %B at %-I:%M:%S %p'),
            'rain_amount_mm': daily['precipitation_sum'][ind],
            'hours_rain': daily['precipitation_hours'][ind],
            'chance_rain': daily['precipitation_probability_max'][ind]
        }
        weather['daily'].append(tmp)

    for ind, x in enumerate(hourly['time']):
        hour = timestamp(hourly['time'][ind], r['timezone'])

        # We don't need to pass old data to the LLM, it would just complicate things;
        # because of our timestamp function, we have aware datetime objects that can be easily compared
        if hour > report_time:
            tmp = {
                'hour': hour.strftime('%A %-d of %B at %-I:%M %p'),
                'temp': hourly['temperature_2m'][ind],
                'precipitation_probability': hourly['precipitation_probability'][ind],
                'cloud_cover': hourly['cloud_cover'][ind],
                'visibility': hourly['visibility'][ind]
            }
            weather['hourly'].append(tmp)

    return weather

def sum_weather(weather, time):
    client = ollama.Client()

    messages = [
        {
            'role': 'system',
            'content': "Without repeating the request, make a two sentence summary of the provided weather information. rain is last hour in mm, snow is cm, temperature in celsius. if current_time is provided, reference it."
        },
        {
            'role': 'user',
            'content': f"As of {time} the weather data is {json.dumps(weather)}"
        }
    ]

    resp = client.chat(model='llama3.1', messages=messages)
    return resp['message']['content']

def the_report(summary, anchor_person = 'William Shakespeare'):
    client = ollama.Client()

    location = " ".join(summary['location'])

    # catch the case where we get an empty value
    if len(anchor_person) < 1:
        anchor_person = 'William Shakespeare'

    messages = [
        {
            'role': 'system',
            'content': f"Please create a weather report as if {anchor_person} wrote it, based on the weather information provided by the user, for {location}. Mention {location} in the response"
        },
        {
            'role': 'user',
            'content': json.dumps(summary)
        }
    ]

    response = client.chat(model='llama3.1', messages=messages)

    return response['message']['content']


parser = argparse.ArgumentParser()

parser.add_argument("question", nargs='*', help="Question to ask LLM")

cli_args = parser.parse_args()

question = ' '.join(cli_args.question)

print(f"Asking: {question}")

answer = get_city(question)

print(json.dumps(answer, indent=2))

location = answer['location']

coords = get_coords(location['city'], location['state'], location['country'], test=True)

# print(coords)

forecast = weather_lat_long(coords)

report_time = forecast['current']['current_time']

summary = {
    'daily': sum_weather(forecast['daily'], report_time),
    'hourly': sum_weather(forecast['hourly'], report_time),
    'current': sum_weather(forecast['current'], report_time),
    'location': [location['city'], location['state'], location['country']]
}

if 'weatherperson' in location.keys():
    weather_report = the_report(summary, location['weatherperson'])
else:
    weather_report = the_report(summary)

print(weather_report)

Bringing Forecasts (and Possibilities) to Life

Congratulations on building a weather bot that doesn’t just inform—it captivates. You’ve created a tool that delivers forecasts with flair, turning routine weather checks into fun and entertaining interactions that are, perhaps most importantly, delivered fast and accurately. Now, you’re equipped to harness real-time data, leverage local LLMs, and customize responses that adapt to any voice you choose.

Furthermore, this weather bot is just the start of how AI can elevate everyday experiences into dynamic, impactful interactions. With infrastructure designed to support seamless deployment, integration, and scaling across enterprise needs, Sandgarden can bring your unique vision to life efficiently and with real impact. Imagine the possibilities.

So go ahead—make it rain, shine, or narrate in iambic pentameter. With Sandgarden, this is only the beginning.
