
Given a list of cities, this recipe fetches their latitudes and longitudes from one website (a database used for astrology, of all things) and uses them to build a URL for another website which creates a map highlighting the cities against the outlines of continents. Maybe some day it will be clever enough to load the latitudes and longitudes as waypoints into your GPS receiver.

Python, 119 lines
import string, urllib, re, os, exceptions

JUST_THE_US = 0

class CityNotFound(exceptions.Exception):
    pass

def xerox_parc_url(marklist):
    """Build a Xerox PARC Map Viewer URL marking each (lat, lon) pair
    in marklist, centered on the average of the positions."""
    avg_lat, avg_lon = max_lat, max_lon = marklist[0]
    marks = "%f,%f" % marklist[0]
    for lat, lon in marklist[1:]:
        marks = marks + ";%f,%f" % (lat, lon)
        avg_lat = avg_lat + lat
        avg_lon = avg_lon + lon
        if lat > max_lat: max_lat = lat
        if lon > max_lon: max_lon = lon
    avg_lat = avg_lat / len(marklist)
    avg_lon = avg_lon / len(marklist)
    if len(marklist) == 1:
        max_lat, max_lon = avg_lat + 1, avg_lon + 1
    diff = max(max_lat - avg_lat, max_lon - avg_lon)
    D = {'height': 4 * diff,
         'width': 4 * diff,
         'lat': avg_lat, 'lon': avg_lon,
         'marks': marks}
    if JUST_THE_US:
        url = ("http://pubweb.parc.xerox.com/map/db=usa/ht=%(height)f" +
               "/wd=%(width)f/color=1/mark=%(marks)s/lat=%(lat)f/" +
               "lon=%(lon)f/") % D
    else:
        url = ("http://pubweb.parc.xerox.com/map/color=1/ht=%(height)f" +
               "/wd=%(width)f/color=1/mark=%(marks)s/lat=%(lat)f/" +
               "lon=%(lon)f/") % D
    return url

"""Presumably the intent of the cookbook is largely educational. I should
therefore illuminate something I did really pretty badly in this function.
Notice the ridiculous clumsiness of the "for x in inf.readlines()" loop,
which is utterly and stupidly dependent upon the specific format of the
HTML page returned by the www.astro.ch site. If that format ever changes,
the function breaks. If I had been clever and used htmllib.HTMLParser, I
might have been a bit more immune to modest format changes. If my motivation
persists, I might take a stab at that. No promises."""
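
# A rough sketch of that idea, not part of the original recipe: rather than
# splitting each line on '<' and '>' by hand, collect the text of every
# table cell with the standard-library HTMLParser module (htmllib would work
# much the same way but wants a formatter object).  The class and names below
# are mine, and locating the latitude/longitude cells among the collected
# strings is still left to the caller.
from HTMLParser import HTMLParser

class CellCollector(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.cells = []        # text of every <td> seen so far
        self.current = None    # text of the cell being read, or None
    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.current = ""
    def handle_endtag(self, tag):
        if tag == "td" and self.current is not None:
            self.cells.append(self.current.strip())
            self.current = None
    def handle_data(self, data):
        if self.current is not None:
            self.current = self.current + data
# Usage sketch: feed it inf.read() instead of looping over readlines(), then
# scan collector.cells for the city, state, latitude and longitude strings.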

def findcity(city, state):
    Please_click = re.compile("Please click")
    city_re = re.compile(city)
    state_re = re.compile(state)
    url = ("""http://www.astro.ch/cgi-bin/atlw3/aq.cgi?expr=%s&lang=e"""
           % (string.replace(city, " ", "+") + "%2C+" + state))
    lst = [ ]
    found_please_click = 0
    inf = urllib.FancyURLopener().open(url)
    for x in inf.readlines():
        x = x[:-1]
        if Please_click.search(x) != None:
            # here is one assumption about unchanging structure
            found_please_click = 1
        if (city_re.search(x) != None and
            state_re.search(x) != None and
            found_please_click):
            # pick apart the HTML pieces
            L = [ ]
            for y in string.split(x, '<'):
                L = L + string.split(y, '>')
            # discard any pieces of zero length
            L = filter(lambda x: len(x) > 0, L)
            lst.append(L)
    inf.close()
    try:
        # here's another few assumptions
        x = lst[0]
        lat, lon = x[6], x[10]
    except IndexError:
        raise CityNotFound
    def getdegrees(x, dividers):
        def myeval(x):
            if len(x) == 2 and x[0] == "0":
                return eval(x[1])
            return eval(x)
        if string.count(x, dividers[0]):
            x = map(myeval, string.split(x, dividers[0]))
            return x[0] + (x[1] / 60.)
        elif string.count(x, dividers[1]):
            x = map(myeval, string.split(x, dividers[1]))
            return -(x[0] + (x[1] / 60.))
        else:
            raise "Bogus result", x
    return getdegrees(lat, "ns"), getdegrees(lon, "ew")

def showcities(citylist):
    marklist = [ ]
    for city, state in citylist:
        try:
            lat, lon = findcity(city, state)
            print ("%s, %s:" % (city, state)), lat, lon
            marklist.append((lat, lon))
        except CityNotFound:
            print "%s, %s: not in database?" % (city, state)
    url = xerox_parc_url(marklist)
    # print url
    os.system('netscape "%s"' % url)

citylist = (("Natick", "MA"),
            ("Rhinebeck", "NY"),
            ("New Haven", "CT"),
            ("King of Prussia", "PA"))

citylist1 = (("Mexico City", "Mexico"),
             ("Acapulco", "Mexico"),
             ("Abilene", "Texas"),
             ("Tulum", "Mexico"))

citylist2 = (("Munich", "Germany"),
             ("London", "England"),
             ("Madrid", "Spain"),
             ("Paris", "France"))

showcities(citylist1)

6 comments

Steve Pike 22 years, 8 months ago

Fix for '302' error. The statement

inf = urllib.URLopener().open(url)

needs replacing with

inf = urllib.FancyURLopener().open(url)

to prevent an exception being raised when the site returns an HTTP status code 302 (redirect).

Tim Allen 22 years, 5 months ago

Illegal use of their site. The owners of the site you are using to access the lat/long database are very specific about this: "Automatic access by query-generating software is not acceptable and will be considered illegal".

Joel Lawhead 21 years, 9 months ago

You're right. This program could use Microsoft TerraServer to do the same thing legally.

Glenn Meader 20 years, 8 months ago

Legal way to get Lat Long from Terraserver. GetPlaceList is a Web Services function accessible using either the HTTP or SOAP protocol. It returns XML-formatted info (including lat/long) about places that match the name given.

Here's a demo page with the details on access:

http://terraserver.microsoft.net/TerraService.asmx?op=GetPlaceList
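
For example, a plain HTTP GET of the form below (the place name is only an illustration) returns an XML document whose PlaceFacts/Center element carries the Lat and Lon values:

http://terraserver.microsoft.net/TerraService.asmx/GetPlaceList?placeName=Natick%2C+MA&MaxItems=1&imagePresence=false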

Will Ware 20 years, 4 months ago

Accessing Terraserver. Thanks for the pointer to Terraserver. Here is an update of the findcity function. The PARC map server is long since 404-ed, so the rest of the recipe is useless. The getText() function was very hastily modified from the one appearing in the Python docs for xml.dom.minidom, and could probably be improved. But it's nice to see how elegantly the minidom stuff works.

import xml.dom.minidom   # needed here, in addition to the recipe's string and urllib imports

def findcity(city, state):
    def getText(nodelist):
        rc = ""
        for node in nodelist:
            if node.nodeType == node.TEXT_NODE:
                rc = rc + node.data
            elif node.hasChildNodes():
                rc = rc + getText(node.childNodes)
        return rc
    url = (("http://terraserver.microsoft.net/TerraService.asmx/GetPlaceList?" +
            "placeName=%s&MaxItems=1&imagePresence=false")
           % (string.replace(city, " ", "+") + "%2C+" + state))
    inf = urllib.FancyURLopener().open(url)
    dom = xml.dom.minidom.parse(inf)
    inf.close()
    placeFacts = dom.getElementsByTagName("PlaceFacts")
    center = placeFacts[0].getElementsByTagName("Center")
    lat = string.atof(getText(center[0].getElementsByTagName("Lat")))
    lon = string.atof(getText(center[0].getElementsByTagName("Lon")))
    return lat, lon   # same (lat, lon) order as the original findcity
a 13 years, 12 months ago

string.atof() is deprecated; you should use float() instead.
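
That is, the two conversions near the end of the previous comment's code would become:

    lat = float(getText(center[0].getElementsByTagName("Lat")))
    lon = float(getText(center[0].getElementsByTagName("Lon")))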