Facility: 001106
Prairie View Storage
- Facility ID: 001106
- Name: Prairie View Storage
- URL: http://www.prairieviewstoragellc.com/
- Address: 1016 Prairie Rd, Wilmington, OH 45177, USA
- Platform: custom_facility_001106
- Parser File: src/parsers/custom/facility_001106_parser.py
- Last Scraped: 2026-03-27 13:39:20.000164
- Created: 2026-03-23 02:35:08.816820
- Updated: 2026-03-27 13:39:20.027988
- Parser Status: ✓ Working
- Status Reason: N/A
- Last Healing Attempt: Not attempted
Parser Source (src/parsers/custom/facility_001106_parser.py)
```python
"""Parser for Prairie View Storage (Wix site, no pricing listed)."""
from __future__ import annotations

import re

from bs4 import BeautifulSoup

from src.parsers.base import BaseParser, ParseResult, UnitResult


class Facility001106Parser(BaseParser):
    """Extract storage units from Prairie View Storage.

    This Wix-based site lists unit sizes across two locations
    (Clinton County and Highland County) but does not publish prices.
    Sizes appear inside repeater items as ``<h6>`` elements within
    ``<ul>`` lists.
    """

    platform = "custom_facility_001106"

    _SIZE_RE = re.compile(
        r"(\d+)\s*(?:ft|'|′)?\s*[xX×]\s*(\d+)\s*(?:ft|'|′)?",
    )

    def parse(self, html: str, url: str = "") -> ParseResult:
        soup = BeautifulSoup(html, "lxml")
        result = ParseResult(platform=self.platform, parser_name=self.__class__.__name__)

        for tag in soup.find_all(["script", "style"]):
            tag.decompose()

        # Each repeater item contains a location header and a list of sizes.
        repeater_items = soup.find_all(
            "div",
            attrs={"data-mesh-id": re.compile(r"comp-m19m46zn__item.*inlineContent-gridContainer")},
        )

        seen: set[tuple[int, int]] = set()
        if repeater_items:
            for item in repeater_items:
                # Determine location name from first text block (h4/h5/h6 or plain text).
                lines = [
                    ln.strip()
                    for ln in item.get_text(separator="\n").split("\n")
                    if ln.strip()
                ]
                location = lines[0] if lines else ""

                # Extract sizes from <h6> tags inside <ul> lists.
                for h6 in item.find_all("h6"):
                    text = h6.get_text(strip=True)
                    m = self._SIZE_RE.search(text)
                    if not m:
                        continue
                    w, ln_val = int(m.group(1)), int(m.group(2))
                    if w < 3 or ln_val < 3:
                        continue
                    if (w, ln_val) in seen:
                        continue
                    seen.add((w, ln_val))
                    size_str = f"{w}x{ln_val}"
                    unit = UnitResult()
                    unit.size = size_str
                    _, _, sq = self.normalize_size(size_str)
                    unit.metadata = {"width": w, "length": ln_val, "sqft": sq}
                    if location:
                        unit.description = location
                    result.units.append(unit)
        else:
            # Fallback: scan all text for size patterns.
            body_text = soup.get_text(separator="\n")
            for m in self._SIZE_RE.finditer(body_text):
                w, ln_val = int(m.group(1)), int(m.group(2))
                if w < 3 or ln_val < 3:
                    continue
                if (w, ln_val) in seen:
                    continue
                seen.add((w, ln_val))
                size_str = f"{w}x{ln_val}"
                unit = UnitResult()
                unit.size = size_str
                _, _, sq = self.normalize_size(size_str)
                unit.metadata = {"width": w, "length": ln_val, "sqft": sq}
                result.units.append(unit)

        if not result.units:
            result.warnings.append("No units found")
        else:
            result.warnings.append("No pricing available on this site — sizes only")
        return result
```
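As a sanity check on the size pattern, the regex can be exercised standalone. The pattern below is copied verbatim from `Facility001106Parser._SIZE_RE`; the sample strings are hypothetical variants of what a Wix `<h6>` might contain:

```python
import re

# Copied verbatim from Facility001106Parser._SIZE_RE.
SIZE_RE = re.compile(r"(\d+)\s*(?:ft|'|′)?\s*[xX×]\s*(\d+)\s*(?:ft|'|′)?")

# Hypothetical h6 texts covering the notations the pattern allows:
# bare, spaced, prime/multiplication-sign, and "ft" suffixed.
samples = ["10x20", "10 X 20", "10' × 20'", "10 ft x 20 ft"]

parsed = []
for s in samples:
    m = SIZE_RE.search(s)
    if m:
        parsed.append((int(m.group(1)), int(m.group(2))))

# All four variants normalize to the same (width, length) pair.
assert parsed == [(10, 20)] * 4
```

Note the pattern matches any `N x M` digit pair, which is why `parse` additionally filters out dimensions under 3 ft to avoid false positives from unrelated numbers in page text.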
Scrape Runs (3)
Run #1513 Details
- Status: exported
- Parser Used: Facility001106Parser
- Platform Detected: table_layout
- Units Found: 9
- Stage Reached: exported
- Timestamp: 2026-03-27 13:39:15.990095
Timing
| Stage | Duration |
|---|---|
| Fetch | 3898ms |
| Detect | 36ms |
| Parse | 19ms |
| Export | 18ms |
Snapshot: 001106_20260327T133919Z.html
Parsed Units (9)
| Size | Price |
|---|---|
| 6x10 | No price |
| 6x12 | No price |
| 10x10 | No price |
| 10x12 | No price |
| 10x15 | No price |
| 10x20 | No price |
| 10x24 | No price |
| 4x8 | No price |
| 8x12 | No price |
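The parser attaches a `sqft` value to each unit's metadata via `self.normalize_size`, whose implementation is not shown here. A minimal sketch, assuming it reduces a plain `WxL` string to width times length (the `sqft` helper below is hypothetical, not the `BaseParser` method):

```python
# The nine sizes exported by run #1513, as listed above.
sizes = ["6x10", "6x12", "10x10", "10x12", "10x15",
         "10x20", "10x24", "4x8", "8x12"]

def sqft(size: str) -> int:
    # Hypothetical stand-in for normalize_size: width * length
    # for a plain "WxL" string with integer dimensions.
    w, length = (int(n) for n in size.split("x"))
    return w * length

areas = {s: sqft(s) for s in sizes}
assert areas["10x20"] == 200
assert areas["4x8"] == 32
```

Since the site publishes no prices, square footage is the only quantitative field available for downstream comparison across the nine unit types.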