Facility: 002691
Pioneer Heated Storage
- Facility ID
- 002691
- Name
- Pioneer Heated Storage
- URL
- https://pioneerheatedstorage.com/storage-options/
- Address
- 109 Pioneer Rd, Long Beach, WA 98631, USA
- Platform
- custom_facility_002691
- Parser File
- src/parsers/custom/facility_002691_parser.py
- Last Scraped
- 2026-03-27 13:54:39.511516
- Created
- 2026-03-14 16:21:53.706708
- Updated
- 2026-03-27 13:54:39.539912
- Parser Status
- ✓ Working
- Status Reason
- N/A
- Last Healing Attempt
- Not attempted
Parser Source (src/parsers/custom/facility_002691_parser.py)
"""Parser for Pioneer Heated Storage (facility 002691).

The storage-options page uses a tabbed layout (GenerateBlocks tabs).
The "All UNITS" tab panel lists every unit as a card with:
  - size headline (e.g. "5' x 10'")
  - square / cubic feet
  - short description
  - a "Request" button (no prices published)

Prices are not displayed; the site asks customers to call for
availability. We extract size and description and leave price as None.
"""
from __future__ import annotations

import re

from bs4 import BeautifulSoup

from src.parsers.base import BaseParser, ParseResult, UnitResult


class Facility002691Parser(BaseParser):
    """Extract storage units from Pioneer Heated Storage."""

    platform = "custom_facility_002691"

    # Matches sizes like 5' x 10' or 10′ x 20′
    _SIZE_RE = re.compile(
        r"(\d+)\s*['\u2032\u2019]?\s*[xX\u00d7]\s*(\d+)\s*['\u2032\u2019]?"
    )

    def parse(self, html: str, url: str = "") -> ParseResult:
        soup = BeautifulSoup(html, "lxml")
        result = ParseResult(
            platform=self.platform,
            parser_name=self.__class__.__name__,
        )

        # Locate the first tabpanel (the "All UNITS" tab) to avoid
        # counting duplicates from the per-category tabs.
        all_units_panel = soup.find(attrs={"role": "tabpanel"})
        container = all_units_panel if all_units_panel else soup

        # Each unit size lives in a <p> with class gb-headline-text.
        seen: set[str] = set()
        for headline in container.find_all(
            "p", class_=lambda c: c and "gb-headline-text" in c
        ):
            text = headline.get_text(strip=True)
            m = self._SIZE_RE.search(text)
            if not m:
                continue
            size_text = f"{m.group(1)}x{m.group(2)}"
            if size_text in seen:
                continue
            seen.add(size_text)

            unit = UnitResult()
            unit.size = text  # preserve original formatting
            w, ln, sq = self.normalize_size(size_text)
            if w is not None:
                unit.metadata = {"width": w, "length": ln, "sqft": sq}

            # Walk sibling paragraphs for sqft / description context.
            parent_col = headline.find_parent(
                "div", class_=lambda c: c and "gb-grid-wrapper" in c
            )
            if parent_col:
                paras = parent_col.find_all("p")
                descs = []
                for p in paras:
                    p_text = p.get_text(strip=True)
                    if p_text and p_text != text:
                        descs.append(p_text)
                if descs:
                    unit.description = " | ".join(descs)

            # Price is not published on this site.
            unit.price = None
            result.units.append(unit)

        if not result.units:
            result.warnings.append(
                "No units found -- page structure may have changed"
            )
        return result
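The size regex above can be exercised on its own as a quick sanity check. The sketch below uses only the standard-library `re` module (no BaseParser or BeautifulSoup) to show that ASCII-apostrophe and prime-mark headlines collapse to the same dedup key; the `sqft = width * length` arithmetic is an assumption about what `normalize_size` computes, not taken from the source.

```python
import re

# Same pattern as _SIZE_RE in the parser: digits, an optional foot mark
# (ASCII apostrophe, prime, or right single quote), and an x/X/x separator.
SIZE_RE = re.compile(
    r"(\d+)\s*['\u2032\u2019]?\s*[xX\u00d7]\s*(\d+)\s*['\u2032\u2019]?"
)


def size_key(text: str):
    """Return a normalized 'WxL' key plus derived dimensions, or None."""
    m = SIZE_RE.search(text)
    if not m:
        return None
    w, ln = int(m.group(1)), int(m.group(2))
    # Assumed convention: square footage is simply width * length.
    return {"key": f"{w}x{ln}", "width": w, "length": ln, "sqft": w * ln}


# Both quote styles normalize to the same key, so the parser's `seen`
# set deduplicates a unit regardless of how the site renders the marks.
print(size_key("5' x 10'"))            # ASCII apostrophes -> key "5x10"
print(size_key("5\u2032 x 10\u2032"))  # prime marks, as the site renders them
print(size_key("Request a unit"))      # no size present -> None
```

Non-matching text (button labels, descriptions) simply returns `None`, which mirrors the `if not m: continue` branch in `parse`.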
Scrape Runs (5)
- Run #1883 | 2026-03-27 13:54:36.596056 | exported | 5 units | Facility002691Parser
- Run #1882 | 2026-03-27 13:54:36.540619 | exported | 5 units | Facility002691Parser
- Run #1204 | 2026-03-23 02:55:55.441020 | exported | Facility002691Parser
- Run #711 | 2026-03-21 18:47:19.004145 | exported | 5 units | Facility002691Parser
- Run #260 | 2026-03-14 16:29:09.642455 | exported | Facility002691Parser
Run #1882 Details
- Status
- exported
- Parser Used
- Facility002691Parser
- Platform Detected
- table_layout
- Units Found
- 5
- Stage Reached
- exported
- Timestamp
- 2026-03-27 13:54:36.540619
Timing
| Stage | Duration |
|---|---|
| Fetch | 2881ms |
| Detect | 20ms |
| Parse | 11ms |
| Export | 19ms |
Snapshot: 002691_20260327T135439Z.html
Parsed Units (5)
- 5′ x 10′ | No price
- 5′ x 15′ | No price
- 10′ x 10′ | No price
- 10′ x 15′ | No price
- 10′ x 20′ | No price
All Failures for this Facility (2)
- parse | _WarningAsException | scraper | no_units_extracted | warning
  Run #N/A | 2026-03-23 02:55:57.306709
  No units extracted for 002691
  Stack trace: src.reporting.failure_reporter._WarningAsException: No units extracted for 002691
- parse | _WarningAsException | scraper | no_units_extracted | warning
  Run #N/A | 2026-03-14 16:29:13.617320
  No units extracted for 002691
  Stack trace: src.reporting.failure_reporter._WarningAsException: No units extracted for 002691
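Both failures record a parser warning ("No units extracted") through a `_WarningAsException` type, i.e. a non-fatal warning promoted to an exception-shaped record so it flows through the same failure log as real errors. The sketch below is a hypothetical reconstruction of that pattern: only the class name, message text, and the stage/category/severity fields appear in the log above; the function name and record shape are assumptions.

```python
class _WarningAsException(Exception):
    """Marks a non-fatal parser warning so it can be logged as a failure."""


def warning_to_failure(facility_id: str, warning: str) -> dict:
    """Wrap a parser warning in a failure-style record (sketch).

    The exception is instantiated but never raised: it only supplies a
    type name and message so the record looks like a captured error.
    """
    exc = _WarningAsException(f"No units extracted for {facility_id}")
    return {
        "stage": "parse",
        "category": "no_units_extracted",
        "severity": "warning",
        "error_type": type(exc).__name__,
        "message": str(exc),
        "detail": warning,  # the original parser warning text
    }
```

Under this reading, the `Run #N/A` entries above are simply warnings from runs #1204 and #260 (the two runs with no unit count) replayed through the failure reporter.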