Contextual count embedder

Contextual Count Embedder.

This module contains the contextual count embedder implementation from the ARIC@SIGSPATIAL 2021 paper [1].

References
  1. https://doi.org/10.1145/3486626.3493434
  2. https://arxiv.org/abs/2111.00990

ContextualCountEmbedder(
    neighbourhood,
    neighbourhood_distance,
    concatenate_vectors=False,
    expected_output_features=None,
    count_subcategories=False,
    num_of_multiprocessing_workers=-1,
    multiprocessing_activation_threshold=None,
)

Bases: CountEmbedder

ContextualCountEmbedder.

PARAMETER DESCRIPTION
neighbourhood

Neighbourhood object used to get neighbours for the contextualization.

TYPE: Neighbourhood[T]

neighbourhood_distance

How many neighbour levels should be included in the embedding.

TYPE: int

concatenate_vectors

Whether to sum all neighbours into a single vector with the same width as CountEmbedder, or to concatenate them to the wide format and keep all neighbour levels separate. Defaults to False.

TYPE: bool DEFAULT: False

expected_output_features

The features that are expected to be found in the resulting embedding. If not None, missing features are added and filled with 0, unexpected features are removed, and the resulting columns are sorted accordingly. Defaults to None.

TYPE: Optional[Union[list[str], OsmTagsFilter, GroupedOsmTagsFilter]] DEFAULT: None

count_subcategories

Whether to count all subcategories individually or count features only on the highest level, based on the feature column name. Defaults to False.

TYPE: bool DEFAULT: False

num_of_multiprocessing_workers

Number of workers used for multiprocessing. Defaults to -1, which uses the total number of available CPU threads. Values 0 and 1 disable multiprocessing. Similar to the n_jobs parameter from the scikit-learn library. See the sketch after this parameter list for how the worker count interacts with the activation threshold.

TYPE: int DEFAULT: -1

multiprocessing_activation_threshold

Number of seeds required to start processing on multiple processes. Activating multiprocessing for a small number of points might not be worthwhile. Defaults to 100.

TYPE: Optional[int] DEFAULT: None
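
To make the interplay of these two multiprocessing parameters concrete, here is a minimal, hypothetical sketch of how they could be resolved (the library's internal _parse_num_of_multiprocessing_workers and _parse_multiprocessing_activation_threshold helpers may behave differently):

import multiprocessing


def resolve_workers(num_of_multiprocessing_workers: int = -1) -> int:
    # -1 means "use all available CPU threads"; 0 and 1 disable multiprocessing.
    if num_of_multiprocessing_workers == -1:
        return multiprocessing.cpu_count()
    return max(num_of_multiprocessing_workers, 1)


def should_use_multiprocessing(
    number_of_seeds: int, workers: int, activation_threshold: int = 100
) -> bool:
    # Multiprocessing is only started when there are enough seeds to justify the overhead.
    return workers > 1 and number_of_seeds >= activation_threshold


workers = resolve_workers(-1)
print(should_use_multiprocessing(number_of_seeds=50, workers=workers))    # False: below threshold
print(should_use_multiprocessing(number_of_seeds=5000, workers=workers))  # True on multi-core machines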

RAISES DESCRIPTION
ValueError

If neighbourhood_distance is negative.

Source code in srai/embedders/contextual_count_embedder.py
def __init__(
    self,
    neighbourhood: Neighbourhood[IndexType],
    neighbourhood_distance: int,
    concatenate_vectors: bool = False,
    expected_output_features: Optional[
        Union[list[str], OsmTagsFilter, GroupedOsmTagsFilter]
    ] = None,
    count_subcategories: bool = False,
    num_of_multiprocessing_workers: int = -1,
    multiprocessing_activation_threshold: Optional[int] = None,
) -> None:
    """
    Init ContextualCountEmbedder.

    Args:
        neighbourhood (Neighbourhood[T]): Neighbourhood object used to get neighbours for
            the contextualization.
        neighbourhood_distance (int): How many neighbour levels should be included in
            the embedding.
        concatenate_vectors (bool, optional): Whether to sum all neighbours into a single vector
            with the same width as `CountEmbedder`, or to concatenate them to the wide format
            and keep all neighbour levels separate. Defaults to False.
        expected_output_features
            (Union[List[str], OsmTagsFilter, GroupedOsmTagsFilter], optional):
            The features that are expected to be found in the resulting embedding.
            If not None, the missing features are added and filled with 0.
            The unexpected features are removed. The resulting columns are sorted accordingly.
            Defaults to None.
        count_subcategories (bool, optional): Whether to count all subcategories individually
            or count features only on the highest level based on the feature column name.
            Defaults to False.
        num_of_multiprocessing_workers (int, optional): Number of workers used for
            multiprocessing. Defaults to -1, which uses the total number of available
            CPU threads. `0` and `1` values disable multiprocessing.
            Similar to `n_jobs` parameter from `scikit-learn` library.
        multiprocessing_activation_threshold (int, optional): Number of seeds required to start
            processing on multiple processes. Activating multiprocessing for a small
            number of points might not be worthwhile. Defaults to 100.

    Raises:
        ValueError: If `neighbourhood_distance` is negative.
    """
    super().__init__(expected_output_features, count_subcategories)

    self.neighbourhood = neighbourhood
    self.neighbourhood_distance = neighbourhood_distance
    self.concatenate_vectors = concatenate_vectors

    if self.neighbourhood_distance < 0:
        raise ValueError("Neighbourhood distance must be non-negative.")

    self.num_of_multiprocessing_workers = _parse_num_of_multiprocessing_workers(
        num_of_multiprocessing_workers
    )
    self.multiprocessing_activation_threshold = _parse_multiprocessing_activation_threshold(
        multiprocessing_activation_threshold
    )
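
A minimal instantiation sketch (assuming H3-indexed regions; H3Neighbourhood is one of the Neighbourhood implementations available in srai.neighbourhoods):

from srai.embedders import ContextualCountEmbedder
from srai.neighbourhoods import H3Neighbourhood

# Contextualize each region with up to 2 rings of H3 neighbours and keep the
# compact (squashed) output with the same width as a plain CountEmbedder.
embedder = ContextualCountEmbedder(
    neighbourhood=H3Neighbourhood(),
    neighbourhood_distance=2,
    concatenate_vectors=False,
    count_subcategories=True,
)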

transform(regions_gdf, features_gdf, joint_gdf)

Embed a given GeoDataFrame.

Creates region embeddings by counting the frequencies of each feature value and applying a contextualization based on the neighbours of regions. For each region, features are altered based on its neighbours, either by adding averaged values diminished with distance, or by adding new separate columns with a neighbour-distance postfix. Expects features_gdf to be in wide format, with each column being a separate type of feature (e.g. amenity, leisure) and rows holding the values of these features for each object. The resulting rows hold the count of each feature type in each region. Counts can be fractional because neighbour values are averaged to represent a single value from all neighbours on a given level.
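
The two output modes can be illustrated with a toy sketch (the 1 / (distance + 1) weight below is assumed purely for illustration; the library's actual diminishing function may differ):

import numpy as np

own_counts = np.array([3.0, 0.0])                        # e.g. ["amenity", "leisure"]
neighbours_level_1 = np.array([[1.0, 2.0], [0.0, 1.0]])  # two direct neighbours
neighbours_level_2 = np.array([[4.0, 0.0]])              # one neighbour two hops away

# Squashed mode (concatenate_vectors=False): same width as CountEmbedder, with
# averaged neighbour counts added using an (assumed) distance-based weight.
squashed = own_counts.copy()
for distance, level in enumerate([neighbours_level_1, neighbours_level_2], start=1):
    squashed += level.mean(axis=0) / (distance + 1)

# Concatenated mode (concatenate_vectors=True): each averaged neighbour level is
# kept as a separate set of columns, widening the embedding.
concatenated = np.concatenate(
    [own_counts, neighbours_level_1.mean(axis=0), neighbours_level_2.mean(axis=0)]
)
print(squashed.shape, concatenated.shape)  # (2,) (6,)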

PARAMETER DESCRIPTION
regions_gdf

Region indexes and geometries.

TYPE: GeoDataFrame

features_gdf

Feature indexes, geometries and feature values.

TYPE: GeoDataFrame

joint_gdf

Joiner result with region-feature multi-index.

TYPE: GeoDataFrame

RETURNS DESCRIPTION
DataFrame

pd.DataFrame: Embedding for each region in regions_gdf.

RAISES DESCRIPTION
ValueError

If features_gdf is empty and self.expected_output_features is not set.

ValueError

If any of the gdfs index names is None.

ValueError

If joint_gdf.index is not of type pd.MultiIndex or doesn't have 2 levels.

ValueError

If index levels in gdfs don't overlap correctly.

Source code in srai/embedders/contextual_count_embedder.py
def transform(
    self,
    regions_gdf: gpd.GeoDataFrame,
    features_gdf: gpd.GeoDataFrame,
    joint_gdf: gpd.GeoDataFrame,
) -> pd.DataFrame:
    """
    Embed a given GeoDataFrame.

    Creates region embeddings by counting the frequencies of each feature value and applying
    a contextualization based on neighbours of regions. For each region, features will be
    altered based on the neighbours either by adding averaged values diminished based on distance,
    or by adding new separate columns with neighbour distance postfix.
    Expects features_gdf to be in wide format with each column being a separate type of
    feature (e.g. amenity, leisure) and rows to hold values of these features for each object.
    The rows will hold numbers of this type of feature in each region. Numbers can be
    fractional because neighbourhoods are averaged to represent a single value from
    all neighbours on a given level.

    Args:
        regions_gdf (gpd.GeoDataFrame): Region indexes and geometries.
        features_gdf (gpd.GeoDataFrame): Feature indexes, geometries and feature values.
        joint_gdf (gpd.GeoDataFrame): Joiner result with region-feature multi-index.

    Returns:
        pd.DataFrame: Embedding for each region in regions_gdf.

    Raises:
        ValueError: If features_gdf is empty and self.expected_output_features is not set.
        ValueError: If any of the gdfs index names is None.
        ValueError: If joint_gdf.index is not of type pd.MultiIndex or doesn't have 2 levels.
        ValueError: If index levels in gdfs don't overlap correctly.
    """
    counts_df = super().transform(regions_gdf, features_gdf, joint_gdf)

    result_df: pd.DataFrame
    if self.concatenate_vectors:
        result_df = self._get_concatenated_embeddings(counts_df)
    else:
        result_df = self._get_squashed_embeddings(counts_df)

    return result_df
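
A minimal end-to-end sketch (assuming an area_gdf GeoDataFrame with the area of interest already exists; the regionalizer, loader and tag filter below are just one possible setup):

from srai.embedders import ContextualCountEmbedder
from srai.joiners import IntersectionJoiner
from srai.loaders import OSMOnlineLoader
from srai.neighbourhoods import H3Neighbourhood
from srai.regionalizers import H3Regionalizer

# area_gdf: GeoDataFrame with the area of interest (assumed to be defined earlier).
regions_gdf = H3Regionalizer(resolution=9).transform(area_gdf)
features_gdf = OSMOnlineLoader().load(area_gdf, tags={"amenity": True, "leisure": True})
joint_gdf = IntersectionJoiner().transform(regions_gdf, features_gdf)

embedder = ContextualCountEmbedder(
    neighbourhood=H3Neighbourhood(regions_gdf),
    neighbourhood_distance=2,
    concatenate_vectors=True,
)
embeddings = embedder.transform(regions_gdf, features_gdf, joint_gdf)
print(embeddings.head())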