Hex2vec embedder
In [1]:
from pytorch_lightning import seed_everything
from srai.embedders import Hex2VecEmbedder
from srai.joiners import IntersectionJoiner
from srai.loaders import OSMOnlineLoader
from srai.neighbourhoods import H3Neighbourhood
from srai.plotting import plot_numeric_data, plot_regions
from srai.regionalizers import H3Regionalizer, geocode_to_region_gdf
In [2]:
SEED = 71
seed_everything(SEED)
Seed set to 71
Out[2]:
71
Load data from OSM¶
First, use geocoding to get the area.
In [3]:
area_gdf = geocode_to_region_gdf("Wrocław, Poland")
plot_regions(area_gdf, tiles_style="CartoDB positron")
Out[3]:
(interactive folium map)
Next, download the data for the selected region and the specified tags. We're using OSMOnlineLoader here, as it's faster for small numbers of tags. In a real-life scenario with more tags, you would likely want to use the OSMPbfLoader instead.
In [4]:
tags = {
    "leisure": "park",
    "landuse": "forest",
    "amenity": ["bar", "restaurant", "cafe"],
    "water": "river",
    "sport": "soccer",
}
loader = OSMOnlineLoader()
features_gdf = loader.load(area_gdf, tags)

folium_map = plot_regions(area_gdf, colormap=["rgba(0,0,0,0)"], tiles_style="CartoDB positron")
features_gdf.explore(m=folium_map)
Out[4]:
(interactive folium map)
Prepare the data for embedding¶
After downloading the data, we need to prepare it for embedding. Namely, we need to regionalize the selected area and join the features with the regions.
In [5]:
regionalizer = H3Regionalizer(resolution=9)
regions_gdf = regionalizer.transform(area_gdf)
plot_regions(regions_gdf, tiles_style="CartoDB positron")
Out[5]:
(interactive folium map)
In [6]:
joiner = IntersectionJoiner()
joint_gdf = joiner.transform(regions_gdf, features_gdf)
joint_gdf
Out[6]:
| region_id | feature_id |
|---|---|
| 891e2040897ffff | node/280727473 |
| 891e2040d4bffff | node/300461026 |
| 891e2040d4bffff | node/300461036 |
| 891e2040d5bffff | node/300461042 |
| 891e2040887ffff | node/300461045 |
| ... | ... |
| 891e2042053ffff | way/1360073315 |
| 891e2042637ffff | way/1360073315 |
| 891e20420cbffff | way/1360073315 |
| 891e20420dbffff | way/1360073315 |
| 891e20420c3ffff | way/1360073315 |
4065 rows × 0 columns
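Note that the joiner's result carries no data columns: it is an index-only DataFrame whose two-level MultiIndex pairs each region with the features intersecting it. A minimal sketch of that shape in plain pandas, using a few region/feature ids from the output above (this mimics only the structure, not the real IntersectionJoiner implementation):

```python
import pandas as pd

# Region/feature pairs mimicking joint_gdf: a two-level
# (region_id, feature_id) MultiIndex with zero data columns.
pairs = [
    ("891e2040897ffff", "node/280727473"),
    ("891e2040d4bffff", "node/300461026"),
    ("891e2040d4bffff", "node/300461036"),
]
joint = pd.DataFrame(
    index=pd.MultiIndex.from_tuples(pairs, names=["region_id", "feature_id"])
)

print(joint.shape)  # (3, 0) -- rows, but no columns
# Selecting one region returns all features joined to it:
print(joint.loc["891e2040d4bffff"].index.tolist())
```

This is why the real output above reports "4065 rows × 0 columns": all information lives in the index.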
Embedding¶
After preparing the data we can proceed with generating embeddings for the regions.
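Hex2VecEmbedder([15, 10]) builds a small encoder over the feature columns. The tags loaded above yield 7 columns (park, forest, bar, restaurant, cafe, river, soccer), so, assuming two fully connected layers with biases (a back-of-the-envelope sanity check, not a statement of the exact architecture), the trainable parameter count works out as follows and matches the 280 reported in the Lightning model summary below:

```python
# Parameter count for an MLP encoder 7 -> 15 -> 10 with biases.
# 7 input columns come from the OSM tags loaded above; [15, 10]
# are the layer sizes passed to Hex2VecEmbedder.
in_dim, hidden, out = 7, 15, 10

params_layer1 = in_dim * hidden + hidden  # 105 weights + 15 biases
params_layer2 = hidden * out + out        # 150 weights + 10 biases
total = params_layer1 + params_layer2

print(total)  # 280
```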
In [7]:
import warnings

neighbourhood = H3Neighbourhood(regions_gdf)
embedder = Hex2VecEmbedder([15, 10])

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    embeddings = embedder.fit_transform(
        regions_gdf,
        features_gdf,
        joint_gdf,
        neighbourhood,
        trainer_kwargs={"max_epochs": 5, "accelerator": "cpu"},
        batch_size=100,
    )
embeddings
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
  | Name    | Type       | Params | Mode
-----------------------------------------------
0 | encoder | Sequential | 280    | train
-----------------------------------------------
280       Trainable params
0         Non-trainable params
280       Total params
0.001     Total estimated model params size (MB)
4         Modules in train mode
0         Modules in eval mode
`Trainer.fit` stopped: `max_epochs=5` reached.
Out[7]:
| region_id | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|
| 891e2040dc3ffff | 0.656268 | 0.134417 | 0.163386 | 0.327215 | 0.097115 | -0.663489 | 1.178271 | -0.628505 | 0.300285 | -0.507696 |
| 891e2051b2fffff | -0.317609 | 0.008068 | -0.308102 | -0.471363 | 0.017219 | 0.221449 | -0.371562 | 0.344145 | -0.085226 | -0.019029 |
| 891e2047533ffff | -0.610329 | 0.481623 | 0.088298 | -0.091139 | -0.337330 | 0.287571 | 0.079169 | 0.091325 | 0.388366 | -0.183664 |
| 891e2055b5bffff | 0.339083 | -0.240901 | -0.030583 | 0.259178 | 0.246003 | -0.122454 | -0.278094 | -0.106361 | -0.241188 | 0.372357 |
| 891e2042b77ffff | 0.172878 | -0.418366 | 0.489317 | -0.486085 | -0.418959 | 0.046527 | 0.769972 | -0.060110 | 0.081965 | -0.888421 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 891e204045bffff | 0.339083 | -0.240901 | -0.030583 | 0.259178 | 0.246003 | -0.122454 | -0.278094 | -0.106361 | -0.241188 | 0.372357 |
| 891e20454a3ffff | 0.087222 | 0.140724 | 0.454357 | 0.591962 | -0.135180 | 0.023367 | 0.258327 | -0.415607 | 0.333281 | 0.162284 |
| 891e20474a3ffff | 0.339083 | -0.240901 | -0.030583 | 0.259178 | 0.246003 | -0.122454 | -0.278094 | -0.106361 | -0.241188 | 0.372357 |
| 891e204e517ffff | 0.339083 | -0.240901 | -0.030583 | 0.259178 | 0.246003 | -0.122454 | -0.278094 | -0.106361 | -0.241188 | 0.372357 |
| 891e2040497ffff | -0.442270 | 0.109991 | -0.376834 | -0.603973 | -0.056354 | 0.253124 | -0.335904 | 0.426172 | -0.112183 | -0.158308 |
3168 rows × 10 columns
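Notice that several regions in the output above (e.g. 891e2055b5bffff, 891e204045bffff, 891e20474a3ffff, 891e204e517ffff) share an identical embedding: regions with the same feature contents map to the same point. Embedding similarity is commonly measured with cosine similarity; a minimal numpy sketch using two rows copied from the table above (an illustration, not part of the srai API):

```python
import numpy as np

# Embedding rows copied from the output above:
# 891e2055b5bffff and 891e2040dc3ffff.
a = np.array([0.339083, -0.240901, -0.030583, 0.259178, 0.246003,
              -0.122454, -0.278094, -0.106361, -0.241188, 0.372357])
b = np.array([0.656268, 0.134417, 0.163386, 0.327215, 0.097115,
              -0.663489, 1.178271, -0.628505, 0.300285, -0.507696])

def cosine(u, v):
    # Cosine similarity: dot product of the L2-normalized vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(a, a))  # 1.0 -- identical embeddings are maximally similar
print(cosine(a, b))  # well below 1 for dissimilar regions
```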
Visualizing the embeddings' similarity¶
In [8]:
from sklearn.cluster import KMeans
clusterizer = KMeans(n_clusters=5, random_state=SEED)
clusterizer.fit(embeddings)
embeddings["cluster"] = clusterizer.labels_
embeddings
Out[8]:
| region_id | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | cluster |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 891e2040dc3ffff | 0.656268 | 0.134417 | 0.163386 | 0.327215 | 0.097115 | -0.663489 | 1.178271 | -0.628505 | 0.300285 | -0.507696 | 4 |
| 891e2051b2fffff | -0.317609 | 0.008068 | -0.308102 | -0.471363 | 0.017219 | 0.221449 | -0.371562 | 0.344145 | -0.085226 | -0.019029 | 2 |
| 891e2047533ffff | -0.610329 | 0.481623 | 0.088298 | -0.091139 | -0.337330 | 0.287571 | 0.079169 | 0.091325 | 0.388366 | -0.183664 | 2 |
| 891e2055b5bffff | 0.339083 | -0.240901 | -0.030583 | 0.259178 | 0.246003 | -0.122454 | -0.278094 | -0.106361 | -0.241188 | 0.372357 | 1 |
| 891e2042b77ffff | 0.172878 | -0.418366 | 0.489317 | -0.486085 | -0.418959 | 0.046527 | 0.769972 | -0.060110 | 0.081965 | -0.888421 | 0 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 891e204045bffff | 0.339083 | -0.240901 | -0.030583 | 0.259178 | 0.246003 | -0.122454 | -0.278094 | -0.106361 | -0.241188 | 0.372357 | 1 |
| 891e20454a3ffff | 0.087222 | 0.140724 | 0.454357 | 0.591962 | -0.135180 | 0.023367 | 0.258327 | -0.415607 | 0.333281 | 0.162284 | 4 |
| 891e20474a3ffff | 0.339083 | -0.240901 | -0.030583 | 0.259178 | 0.246003 | -0.122454 | -0.278094 | -0.106361 | -0.241188 | 0.372357 | 1 |
| 891e204e517ffff | 0.339083 | -0.240901 | -0.030583 | 0.259178 | 0.246003 | -0.122454 | -0.278094 | -0.106361 | -0.241188 | 0.372357 | 1 |
| 891e2040497ffff | -0.442270 | 0.109991 | -0.376834 | -0.603973 | -0.056354 | 0.253124 | -0.335904 | 0.426172 | -0.112183 | -0.158308 | 2 |
3168 rows × 11 columns
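As expected, the regions with identical embeddings all receive the same cluster label (1 in the table above): KMeans assigns every point to its nearest centroid, so identical inputs can never be split across clusters. A small self-contained sketch of that behaviour with synthetic vectors (not the real embeddings):

```python
import numpy as np
from sklearn.cluster import KMeans

# Three distinct synthetic 10-d "embeddings"; the first one is
# duplicated, mimicking the duplicate rows in the table above.
rng = np.random.default_rng(71)  # same seed value as the notebook
points = rng.normal(size=(3, 10))
X = np.vstack([points, points[0]])  # row 3 duplicates row 0

labels = KMeans(n_clusters=3, random_state=71, n_init=10).fit_predict(X)

# Identical input rows always share a cluster label.
print(labels[0] == labels[3])  # True
```

The resulting `cluster` column can then be drawn on the map, e.g. with the plot_numeric_data helper imported at the top of the notebook.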