Facility: 019385

K&L Storage

Stale Data Warning: This facility has not been successfully scraped in 30 days (threshold: 3 days). Data may be outdated.
Facility Information (active)
Facility ID
019385
Name
K&L Storage
URL
http://www.kandlstorage.com/
Address
N/A
Platform
custom_facility_019385
Parser File
src/parsers/custom/facility_019385_parser.py
Last Scraped
2026-03-23 03:17:34.047013
Created
2026-03-06 23:45:35.865957
Updated
2026-03-23 03:17:34.047013
Parser & Healing Diagnosis (needs_fix)
Parser Status
⚠ Needs Fix
Status Reason
Parser returned 0 units
Last Healing Attempt
Not attempted
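To make the "Parser returned 0 units" diagnosis concrete, here is a minimal, self-contained sketch of the zero-unit path (stdlib `re` only; the real parser uses BeautifulSoup, and the HTML strings below are hypothetical stand-ins, not actual page content):

```python
# Minimal sketch of the zero-unit path: if no <h3> contains "Starting at $"
# (e.g. the fetch returned the home page, or a different platform's markup),
# the parser finds nothing to extract and reports zero units.
import re

PRICE_RE = re.compile(r"Starting\s+at\s+\$(\d[\d,]*)", re.IGNORECASE)

def pricing_headings(html: str) -> list[str]:
    # Crude stand-in for soup.find_all("h3") plus the price filter.
    return [h for h in re.findall(r"<h3>(.*?)</h3>", html) if PRICE_RE.search(h)]

home_page = "<h1>K&L Storage</h1><p>Self storage in Casper, Wyoming.</p>"
pricing_page = "<h3>XTRA LARGE – Starting at $200</h3>"

print(pricing_headings(home_page))     # no pricing H3s -> 0 units
print(pricing_headings(pricing_page))  # one pricing H3
```

The parser below handles the first case by appending a warning and returning an empty result rather than raising.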
Parser Source (src/parsers/custom/facility_019385_parser.py)
"""Parser for K&L Storage (Casper, Wyoming).

The pricing page at /locations-pricing/ lists unit size categories as H3 elements
in the format: "LABEL DIMENSIONS – Starting at $PRICE"

Each H3 lives inside a .row > .col-sm-12 div. Categories without explicit
dimensions (e.g., "XTRA LARGE" and "BOAT & RV STORAGE") are represented with
their label as description and no metadata dimensions.
"""

from __future__ import annotations

import re

from bs4 import BeautifulSoup

from src.parsers.base import BaseParser, ParseResult, UnitResult


class Facility019385Parser(BaseParser):
    """Extract storage units from K&L Storage pricing page.

    Pricing is displayed as H3 headings in the format:
        X-SMALL 5' x 5' or Similar  – Starting at $40
        LARGE 10' x 15' - 10' x 20' or Similar  – Starting at $95
        XTRA LARGE  – Starting at $200
        BOAT & RV STORAGE  – Starting at $40
    """

    platform = "custom_facility_019385"

    # Match "Starting at $NNN" price
    _PRICE_RE = re.compile(r"Starting\s+at\s+\$(\d[\d,]*)", re.IGNORECASE)

    # Match first dimension pair like 5' x 5' or 10' x 10'
    _DIM_RE = re.compile(r"(\d+)['\u2019\u2032]\s*[xX]\s*(\d+)['\u2019\u2032]")

    # Match the category label: the leading word(s) before any dimension or
    # en dash. The hyphen in the class keeps labels like "X-SMALL" intact;
    # IGNORECASE tolerates mixed-case headings (so matching is not actually
    # restricted to all caps).
    _LABEL_RE = re.compile(r"^([A-Z][A-Z &-]+?)(?:\s+\d|\s+–|\s*$)", re.IGNORECASE)

    def parse(self, html: str, url: str = "") -> ParseResult:
        soup = BeautifulSoup(html, "lxml")
        result = ParseResult(platform=self.platform, parser_name=self.__class__.__name__)

        # Find all H3 elements that contain "Starting at $"
        h3_elements = soup.find_all("h3")
        pricing_h3s = [h for h in h3_elements if self._PRICE_RE.search(h.get_text())]

        if not pricing_h3s:
            # May have been given the home page instead of the pricing page
            result.warnings.append(
                "No pricing H3 elements found — snapshot may be the home page "
                "rather than /locations-pricing/"
            )
            return result

        for h3 in pricing_h3s:
            text = h3.get_text(separator=" ", strip=True)

            price_match = self._PRICE_RE.search(text)
            if not price_match:
                continue

            price = self.normalize_price(price_match.group(1))
            unit = UnitResult(price=price)

            # Try to extract the first dimension pair
            dim_matches = self._DIM_RE.findall(text)
            if dim_matches:
                # Use the first (smallest/representative) dimension
                width = float(dim_matches[0][0])
                length = float(dim_matches[0][1])

                # Build a clean size label from all found dimensions
                if len(dim_matches) > 1:
                    # e.g. "10' x 15' - 10' x 20'"
                    dim_parts = [f"{int(w)}' x {int(l)}'" for w, l in dim_matches]
                    unit.size = " - ".join(dim_parts) + " or Similar"
                else:
                    unit.size = f"{int(width)}' x {int(length)}'"

                unit.metadata = {
                    "width": width,
                    "length": length,
                    "sqft": width * length,
                }
            else:
                # No dimensions — use label text as size description
                label_match = self._LABEL_RE.match(text)
                unit.size = label_match.group(1).strip().title() if label_match else text

            unit.description = text
            result.units.append(unit)

        if not result.units:
            result.warnings.append("Pricing H3 elements found but no units extracted")

        return result
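To show what the two extraction regexes pull out of the documented heading format, here is a standalone sketch (not part of the scraper codebase; the regexes are copied verbatim from the source above, and the sample strings come from the class docstring):

```python
# Exercise copies of the parser's _PRICE_RE and _DIM_RE against the sample
# headings from the class docstring.
import re

PRICE_RE = re.compile(r"Starting\s+at\s+\$(\d[\d,]*)", re.IGNORECASE)
DIM_RE = re.compile(r"(\d+)['\u2019\u2032]\s*[xX]\s*(\d+)['\u2019\u2032]")

samples = [
    "X-SMALL 5' x 5' or Similar  – Starting at $40",
    "LARGE 10' x 15' - 10' x 20' or Similar  – Starting at $95",
    "XTRA LARGE  – Starting at $200",
]

for text in samples:
    price = PRICE_RE.search(text).group(1)   # captured digits, e.g. "40"
    dims = DIM_RE.findall(text)              # list of (width, length) pairs
    print(price, dims)
```

Note that the third heading yields a price but an empty dimension list, which is exactly the branch where the parser falls back to `_LABEL_RE` for a size description.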

Scrape Runs (5)

Run #964 Details

Status
exported
Parser Used
Facility019385Parser
Platform Detected
storageunitsoftware
Units Found
0
Stage Reached
exported
Timestamp
2026-03-21 19:10:22.790052
Timing
Stage Duration
Fetch: 4458 ms
Detect: 25 ms
Parse: 12 ms
Export: 3 ms

Snapshot: 019385_20260321T191027Z.html

No units found in this run.

All Failures for this Facility (5)

parse · _WarningAsException · scraper · no_units_extracted · warning · Run #N/A | 2026-03-23 03:17:34.007221

No units extracted for 019385

Stack trace
src.reporting.failure_reporter._WarningAsException: No units extracted for 019385

parse · _WarningAsException · scraper · no_units_extracted · warning · Run #N/A | 2026-03-21 19:10:27.308385

No units extracted for 019385

Stack trace
src.reporting.failure_reporter._WarningAsException: No units extracted for 019385

parse · _WarningAsException · scraper · no_units_extracted · warning · Run #N/A | 2026-03-14 16:53:14.242493

No units extracted for 019385

Stack trace
src.reporting.failure_reporter._WarningAsException: No units extracted for 019385

parse · _WarningAsException · scraper · no_units_extracted · warning · Run #N/A | 2026-03-14 01:04:46.582294

No units extracted for 019385

Stack trace
src.reporting.failure_reporter._WarningAsException: No units extracted for 019385

parse · _WarningAsException · scraper · no_units_extracted · warning · Run #N/A | 2026-03-13 19:10:11.927980

No units extracted for 019385

Stack trace
src.reporting.failure_reporter._WarningAsException: No units extracted for 019385
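The repeated `_WarningAsException` entries suggest a reporter that promotes "no units" warnings into logged failures with stack traces. A hypothetical minimal version of that pattern (the real `src.reporting.failure_reporter` is not shown on this page, so the names and structure here are assumptions):

```python
# Hypothetical sketch: raise-and-catch a warning-wrapper exception so the
# failure log gets a stack trace even though nothing actually crashed.
class _WarningAsException(Exception):
    """Wraps a scraper warning so it can be recorded like a failure."""

def report_no_units(facility_id: str) -> str:
    try:
        raise _WarningAsException(f"No units extracted for {facility_id}")
    except _WarningAsException as exc:
        # A real reporter would also capture traceback.format_exc() here.
        return f"{type(exc).__name__}: {exc}"

print(report_no_units("019385"))
```

This explains why each warning appears with a one-line "stack trace": the exception exists only to be caught and formatted.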
