# Research Log

Autoresearcher iterations — Karpathy-style optimization loop

- Total iterations: 200
- Kept: 0 (0% acceptance rate)
- Valid patches: 0 (0% patch success)
- Avg improvement: 0.0% p95 latency reduction (kept iterations)

## Iteration Performance

Columns in the original export: API · # · Hypothesis · Patch · p95 Before · p95 After · Delta · Result · Date. In every row the patch was invalid, no p95 measurement was taken (shown as n/a), and the result was skip; the hypothesis text is truncated in the export, marked with […].
**#49 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:30:55Z
> **Hypothesis:** The `RedisCache.connect()` method creates the `aioredis` client without specifying a connection pool size, defaulting to a single connection. Under concurrent load, all coroutines queue behind that one connection, adding serialization latency; setting `max_connections` to a larger value (e.g., 20) allows parallel Redis I/O and reduces p95 latency for cache-heavy paths. **Expected Impact:** With the current single-connection default, concurrent requests to `/scrape/status/{run_id}` […]

**#48 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:30:31Z
> **Hypothesis:** The `get_job_results` route builds the paginated cache key by string concatenation **after** the `run_results_key()` call (`cache.run_results_key(run_id) + f":{limit}:{offset}"`), which is fine, but more critically the route then calls `service.get_run_results()` which internally calls `self.get_run_status(run_id)` — a **second independent Redis GET plus a potential Apify HTTP call** — even when the results are already cached. For the common p95 case (cached results), the result i […]

**#47 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:29:54Z
> **Hypothesis:** The `MapsService.__init__` creates a new `ApifyClient` instance on every request (since `MapsService` is instantiated per-request via `Depends(get_maps_service)`), which in turn creates a new `httpx.AsyncClient` lazily — but that client is then discarded after the request, wasting connection pool warmup and preventing HTTP/1.1 keep-alive connection reuse across requests. By making `ApifyClient` a singleton (or attaching it to `app.state` at startup alongside the cache), the underl […]

**#46 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:29:29Z
> **Hypothesis:** The `ApifyClient._request` method creates and checks the `httpx.AsyncClient` via a property getter on every request, but more critically, each `MapsService` instance creates its own `ApifyClient` with its own underlying HTTP connection pool. Since `MapsService` is instantiated per-request (via FastAPI's `Depends`), every request pays the cost of establishing new TCP/TLS connections to Apify rather than reusing a persistent pool. **Fix:** Promote `ApifyClient` to a module-level si […]
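The shared-client singleton proposed in the entries above can be sketched as follows. A counting stand-in replaces `httpx.AsyncClient` so the sketch runs without the library; in the real service the factory would build `httpx.AsyncClient(...)` once at startup instead:

```python
class FakeAsyncClient:
    """Stand-in for httpx.AsyncClient so the sketch runs without the library;
    a real service would construct httpx.AsyncClient(base_url=...) here."""
    instances = 0

    def __init__(self):
        FakeAsyncClient.instances += 1


_shared_client = None


def get_shared_client():
    # Module-level singleton: the first call builds the client, every later
    # call reuses it, so the connection pool survives across requests.
    global _shared_client
    if _shared_client is None:
        _shared_client = FakeAsyncClient()
    return _shared_client
```

The same effect can be had by storing the client on `app.state` during a FastAPI lifespan handler; either way, only one client (and one warm connection pool) exists per process.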
**#45 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:29:01Z
> **Hypothesis:** The `get_job_status` route performs two sequential Redis operations — `cache.get()` inside `service.get_run_status()` and then a separate `cache.ttl()` call — adding an extra network round-trip on every cached-hit path. By using Redis's `GETEX` or combining the get+TTL into a single pipeline, we eliminate the second round-trip; alternatively, we can use `cache.ttl()` only when the value was actually retrieved from cache (skipping it on upstream-fetch paths where we already know th […]

**#44 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:28:50Z
> **Hypothesis:** The `RedisCache.connect()` method creates an `aioredis` client without specifying connection pool limits, defaulting to a pool of 10 connections. Under concurrent load, connection pool exhaustion causes queuing at the Redis layer, adding latency to every cache operation. Setting `max_connections` to a higher value (e.g., 50) and enabling `socket_keepalive=True` will reduce connection wait time and avoid TCP reconnect overhead for persistent connections. **Expected impact:** Cache […]
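For reference, the pool settings this entry suggests map onto redis-py's asyncio API roughly as below; the URL and numeric limits are illustrative, not values taken from the project:

```python
# Configuration sketch for the proposed pool settings (redis-py asyncio API).
import redis.asyncio as aioredis


def make_client(url: str = "redis://localhost:6379/0") -> aioredis.Redis:
    return aioredis.from_url(
        url,
        max_connections=50,     # bound the pool instead of the library default
        socket_keepalive=True,  # keep TCP connections warm between commands
        decode_responses=True,
    )
```

Note the two hypotheses about the default disagree (a single connection vs. a pool of 10); neither was ever validated, since every patch in this log was skipped.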
**#43 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:28:28Z
> **Hypothesis:** The `MapsService` instantiates a new `ApifyClient` (and thus potentially a new `httpx.AsyncClient`) on **every request** because `MapsService` itself is constructed fresh per-request via `Depends(get_maps_service)`. Sharing a single `ApifyClient` instance (with its persistent connection pool) across all requests eliminates repeated TLS handshakes and connection setup overhead, directly reducing p95 latency for requests that hit the upstream Apify API. **Expected Impact:** `httpx.A […]

**#42 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:27:59Z
> **Hypothesis:** The `get_job_results` route builds a per-request cache key by concatenating `run_results_key(run_id) + f":{limit}:{offset}"` as a Python string operation on every request, but more importantly, when there's a cache miss it makes **two sequential Redis round-trips** — one `GET` for the results cache, then another full `get_run_status` call (which itself does a Redis `GET` plus potentially an Apify HTTP call) — before fetching dataset items. We can eliminate the redundant `get_run_s […]

**#41 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:27:32Z
> **Hypothesis:** The `get_job_status` route calls `cache.ttl(cache.run_status_key(run_id))` as a **separate Redis round-trip** after `service.get_run_status(run_id)` has already touched that same key. By pipelining the GET and TTL into a single Redis call using `redis.asyncio` pipeline, we eliminate one full network round-trip on every cache-hit status request, which is the hot path. **Expected Impact:** The status endpoint is polled frequently (clients poll until a job completes). On cache hits, […]
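The GET+TTL pipelining idea, which recurs throughout this log, can be sketched like this; an in-memory stub stands in for `redis.asyncio` so the sketch runs without a server, and the key name is illustrative:

```python
import asyncio


class StubPipeline:
    """In-memory stand-in for a redis.asyncio pipeline (no server needed)."""
    def __init__(self, store):
        self._store, self._ops = store, []

    def get(self, key):
        self._ops.append(("get", key)); return self

    def ttl(self, key):
        self._ops.append(("ttl", key)); return self

    async def execute(self):
        # Both queued commands come back in a single round-trip.
        return [self._store.get(k) if op == "get" else 30
                for op, k in self._ops]

    async def __aenter__(self): return self
    async def __aexit__(self, *exc): return False


class StubRedis:
    def __init__(self, store): self._store = store
    def pipeline(self, transaction=False): return StubPipeline(self._store)


async def get_with_ttl(redis, key):
    # One network round-trip instead of two: queue GET and TTL together.
    async with redis.pipeline(transaction=False) as pipe:
        pipe.get(key)
        pipe.ttl(key)
        value, ttl = await pipe.execute()
    return value, ttl
```

With the real `redis.asyncio` client the `get_with_ttl` body is unchanged, since queued pipeline commands return the pipeline itself and only `execute()` is awaited.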
**#40 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:27:18Z
> **Hypothesis:** The `_wait_for_coalesced_result` method uses a fixed 0.5-second polling interval, meaning coalesced waiters always incur at least 500ms of unnecessary sleep before checking the cache. Replacing this with an adaptive backoff that starts at 50ms and grows to 500ms will dramatically reduce p95 latency for requests that hit the coalescing path, since the cache is typically populated within milliseconds of the leader finishing. **Expected Impact:** - Coalesced requests currently wait a […]
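The adaptive-backoff variant of the coalescing wait might look like this minimal sketch; `check`, the intervals, and the timeout are assumptions, not project code:

```python
import asyncio


async def wait_with_backoff(check, initial=0.05, cap=0.5, timeout=5.0):
    """Poll check() with exponential backoff: 50ms, 100ms, ... capped at 500ms."""
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    delay = initial
    while loop.time() < deadline:
        result = await check()          # e.g. a cache lookup for the result
        if result is not None:
            return result
        await asyncio.sleep(min(delay, deadline - loop.time()))
        delay = min(delay * 2, cap)     # back off, but never past the cap
    return None                         # caller decides how to handle timeout
```

The first retry happens after 50ms instead of 500ms, so waiters that only miss the leader by a few milliseconds pay far less than the old fixed interval.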
**#39 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:27:07Z
> **Hypothesis:** The `RedisCache` class initializes without a connection pool size limit, so under concurrent load each coroutine contends for the default single connection. Configuring an explicit connection pool with `max_connections` and enabling `socket_keepalive` will reduce connection setup overhead and queuing latency for concurrent cache operations. **Expected Impact:** Under p95 load, multiple concurrent requests hitting the cache simultaneously queue behind a single connection. By config […]

**#38 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:26:36Z
> **Hypothesis:** The `ApifyClient` instantiates a new `httpx.AsyncClient` (with connection setup overhead) on every `MapsService` construction, and `MapsService` itself is instantiated per-request via FastAPI's dependency injection. Making `ApifyClient` a singleton with a persistent, reused `httpx.AsyncClient` connection pool will eliminate repeated TCP/TLS handshake overhead on the p95 path. **Expected impact:** The `httpx.AsyncClient` maintains an internal connection pool that reuses keep-alive […]

**#37 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:26:05Z
> **Hypothesis:** The `search_with_coalescing` method calls `request.model_dump(exclude_none=True)` twice — once to compute the payload hash/cache key and once inside `start_job` (which calls `request.model_dump(exclude_none=True)` again). This redundant serialization can be eliminated by computing the payload once and passing it through, saving CPU cycles on every cache-miss path. **Expected Impact:** `model_dump` on a Pydantic v2 model with validation is not free — it involves field iteration, ex […]
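Serializing the request once and threading the dict through is a small refactor; in this sketch a counting stand-in replaces the Pydantic model, and `start_job` plus the key derivation are illustrative:

```python
import hashlib
import json


class FakeRequest:
    """Stand-in for the Pydantic request model; counts serializations."""
    dump_calls = 0

    def model_dump(self, exclude_none=True):
        FakeRequest.dump_calls += 1
        return {"query": "coffee", "limit": 10}


def start_job(payload):
    # Accepts the already-serialized dict instead of re-dumping the model.
    return {"submitted": payload}


def search_with_coalescing(request):
    payload = request.model_dump(exclude_none=True)   # serialize exactly once
    cache_key = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return start_job(payload), cache_key
```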
**#36 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:25:41Z
> **Hypothesis:** The `get_job_status` route makes a redundant second Redis round-trip to fetch the TTL (`cache.ttl(cache.run_status_key(run_id))`) after the service call already read the cached value — this adds a sequential network call on every cache-hit request. By pipelining or eliminating this extra TTL lookup (using the already-known TTL constants instead), we can reduce p95 latency on the hot `/scrape/status/{run_id}` path by one full Redis round-trip (~0.5–2ms per request). **Expected imp […]

**#35 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:25:19Z
> **Hypothesis:** The `RedisCache.set` method calls `json.dumps` with `default=str` on every write, and `RedisCache.get` calls `json.loads` on every read — these are synchronous CPU operations that block the event loop. By using `ujson` (which is 2-5x faster than stdlib `json` for serialization/deserialization) for these hot-path operations, we can reduce the per-cache-operation overhead and lower p95 latency on cache-hit paths. **Expected Impact:** Cache hits are the fast path for repeated request […]

**#34 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:24:54Z
> **Hypothesis:** The `MapsService.__init__` creates a new `ApifyClient` instance on every request (since `MapsService` is instantiated per-request via `Depends(get_maps_service)`), and each `ApifyClient.__init__` calls `CircuitBreaker.get_instance()` which acquires a threading lock. By making `ApifyClient` a module-level singleton (initialized once at startup), we eliminate repeated object construction, lock acquisition, and the lazy `httpx.AsyncClient` recreation check on every request path. **E […]

**#33 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:24:43Z
> **Hypothesis:** The `RedisCache` class creates a new connection pool on every `connect()` call without configuring `max_connections`, defaulting to an unbounded pool that performs expensive connection setup per request under concurrency. By adding explicit connection pool settings (`max_connections`, `socket_keepalive=True`) and enabling `health_check_interval` to avoid stale connection overhead, we reduce the per-request connection acquisition latency at p95. **Expected Impact:** Under concurren […]

**#32 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:24:04Z
> **Hypothesis:** The `ApifyClient` property `http` checks `self._http is None or self._http.is_closed` on every single request, and more critically, the `httpx.AsyncClient` is instantiated **per-`MapsService` instance** (which itself is created **per-request** via `Depends(get_maps_service)`). This means every request creates a fresh HTTP client with no connection pool reuse, incurring TCP handshake + TLS setup overhead on every upstream Apify call. **Expected Impact:** By making the `httpx.AsyncC […]

**#31 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:23:50Z
> **Hypothesis:** The `_wait_for_coalesced_result` method in `MapsService` uses a fixed 0.5-second polling interval with `asyncio.sleep`, meaning coalesced requests always wait at least 500ms before checking the cache even if the leader completes in milliseconds. Reducing the initial poll interval with exponential backoff (starting at 50ms) will dramatically cut p95 latency for coalesced requests that resolve quickly. **Expected Impact:** Coalesced requests currently incur a minimum 500ms penalty p […]

**#30 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:23:28Z
> **Hypothesis:** The `get_job_results` route performs two sequential Redis lookups when there's a cache miss: first checking the results cache, then calling `get_run_status` which performs another Redis lookup for the status cache. These can be parallelized with `asyncio.gather` to eliminate the serial round-trip penalty. **Expected Impact:** On a cache miss for results (but cache hit for status), the current code serializes two Redis round-trips (~0.5–2ms each). By fetching both the results cache […]
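The parallel-lookup idea with `asyncio.gather` can be sketched against an async stub cache; the names and the simulated latency are illustrative, not project code:

```python
import asyncio


class StubCache:
    """Async stand-in for the Redis cache."""
    def __init__(self, store):
        self._store = store

    async def get(self, key):
        await asyncio.sleep(0.01)   # simulated network round-trip
        return self._store.get(key)


async def fetch_results_and_status(cache, results_key, status_key):
    # Both lookups go out concurrently; total wait is roughly one
    # round-trip instead of two sequential ones.
    return await asyncio.gather(cache.get(results_key), cache.get(status_key))
```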
**#29 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:23:03Z
> **Hypothesis:** The `RedisCache.get` and `RedisCache.set` methods each call `json.loads`/`json.dumps` with Python's default JSON encoder/decoder, which is relatively slow for repeated serialization. Replacing these with `orjson` (a Rust-backed JSON library that is 2-10x faster) will reduce serialization overhead on every cache hit and miss, directly cutting p95 latency for cache-heavy paths like `get_job_status` and `get_job_results`. **Expected impact:** Every cache read (GET) and write (SET) c […]
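The `orjson` swap is a drop-in for the cache's serialization hot path; this sketch falls back to stdlib `json` where `orjson` isn't installed, so it shows the wiring rather than the speedup:

```python
try:
    import orjson

    def dumps(obj) -> bytes:
        return orjson.dumps(obj)      # Rust-backed; returns bytes directly

    def loads(raw):
        return orjson.loads(raw)
except ImportError:
    import json                       # fallback keeps the sketch runnable

    def dumps(obj) -> bytes:
        return json.dumps(obj, default=str).encode()

    def loads(raw):
        return json.loads(raw)
```

One behavioral difference to note: `orjson.dumps` returns `bytes`, not `str`, which suits Redis values but may require adjusting callers that expect strings.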
**#28 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:22:47Z
> **Hypothesis:** The `invalidate_run_cache` route calls `cache.delete()` twice sequentially (two Redis round-trips), when both keys could be deleted in a single `DEL` command using Redis's built-in multi-key delete support, eliminating one full network round-trip to Redis. **Expected Impact:** Each `cache.delete()` call is a separate Redis round-trip (~0.1–0.5ms each on localhost, potentially 1–5ms on a remote Redis). By issuing a single `DEL key1 key2` command, we halve the Redis round-trips for […]
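redis-py's `delete` does accept multiple keys, so the two invalidation round-trips collapse into one command; the stub below counts commands to make that visible, and the key names are hypothetical:

```python
import asyncio


class StubRedis:
    """In-memory stand-in; counts commands to show the saved round-trip."""
    def __init__(self, store):
        self._store, self.round_trips = store, 0

    async def delete(self, *keys):
        self.round_trips += 1   # one DEL command, however many keys
        return sum(1 for k in keys if self._store.pop(k, None) is not None)


async def invalidate_run_cache(cache, run_id):
    # Hypothetical key names; with redis-py this is `await redis.delete(k1, k2)`.
    return await cache.delete(f"run:{run_id}:status", f"run:{run_id}:results")
```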
**#27 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:22:16Z
> **Hypothesis:** The `get_job_status` route performs two sequential Redis round-trips — one in `service.get_run_status()` (cache lookup) and a second `cache.ttl()` call afterward — where these could be combined into a single pipelined Redis command, but more impactfully: the TTL call uses a separately constructed key string (`cache.run_status_key(run_id)`) that is recomputed after the service already used it internally, adding redundant string construction overhead on every request. The real win i […]

**#26 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:21:45Z
> **Hypothesis:** The `MapsService` creates a new `ApifyClient` instance on every request (via `get_maps_service` dependency), and each `ApifyClient.__init__` calls `CircuitBreaker.get_instance()` which acquires a threading lock on every instantiation. More critically, the `httpx.AsyncClient` inside `ApifyClient` is lazily created per-instance rather than shared, meaning connection pooling is effectively defeated — each request gets a fresh client with no warm TCP/TLS connections to Apify. **Expec […]

**#25 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:21:21Z
> **Hypothesis:** The `ApifyClient` creates a new `httpx.AsyncClient` instance per-request when the client is closed or None (via the `http` property), and more critically, each `MapsService` instantiation creates a fresh `ApifyClient` — which happens on every request via `Depends(get_maps_service)`. This means no HTTP connection pooling is reused across requests, adding TCP handshake and TLS overhead to every upstream call. **Expected impact:** By sharing a single `ApifyClient` (and thus its `htt […]

**#24 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:20:58Z
> **Hypothesis:** The `RedisCache` class uses `json.dumps` with `default=str` and `json.loads` on every cache operation, which adds unnecessary serialization overhead. Switching to `orjson` (which is significantly faster for both serialization and deserialization) will reduce the CPU time spent in cache get/set operations, directly lowering p95 latency for cache-hit paths. **Expected Impact:** Cache hits dominate the hot path (status checks, coalesced requests, repeated searches). `orjson` is typic […]

**#23 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:20:40Z
> **Hypothesis:** The `invalidate_run_cache` route in `maps.py` calls `cache.delete()` twice sequentially (two separate Redis round-trips), and more broadly the `RedisCache` class has no pipeline/batch support, forcing every multi-key operation to pay two network RTTs. Converting the two `delete` calls into a single `delete` call (which `redis-py` supports with multiple keys) eliminates one Redis round-trip per cache invalidation request. **Expected Impact:** Each Redis operation over a network con […]

**#22 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:20:13Z
> **Hypothesis:** The `ApifyClient._request` method creates a new `httpx.AsyncClient` on every request when `self._http` is `None` or closed, but since `ApifyClient` itself is instantiated fresh on every request (inside `MapsService.__init__`, called from `get_maps_service` dependency), the connection pool is never reused across requests — every upstream call pays a new TCP/TLS handshake cost. Making the `httpx.AsyncClient` a module-level singleton (initialized once at startup and shared across all […]

**#21 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:19:48Z
> **Hypothesis:** The `get_job_status` route makes two sequential Redis round-trips — one in `service.get_run_status()` and a second `cache.ttl()` call — when both could be retrieved in a single pipeline. By using a Redis pipeline to fetch the value and its TTL atomically, we eliminate one full network round-trip on every cached status request. **Expected Impact:** On cache-hit paths (the common case after the first poll), p95 latency is dominated by Redis RTT overhead. Combining `GET` + `TTL` into […]

**#20 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:19:17Z
> **Hypothesis:** The `_wait_for_coalesced_result` polling loop calls `asyncio.sleep(0.5)` and then `self._cache.get()` sequentially, meaning coalesced waiters always incur at least 500ms of unnecessary latency even when the leader completes in milliseconds. Replacing the Redis polling with an in-process `asyncio.Event` (keyed by payload hash) allows waiters to be notified immediately when the leader finishes, eliminating the polling delay entirely for the common case. **Expected Impact:** For coal […]
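In-process `asyncio.Event` coalescing, as this entry proposes, can be sketched like so; error handling and the cross-process Redis fallback are elided, and the names are illustrative:

```python
import asyncio

_cache: dict[str, str] = {}
_events: dict[str, asyncio.Event] = {}


async def coalesced_fetch(key, compute):
    """Leader computes once; followers wake on the Event instead of polling."""
    if key in _cache:
        return _cache[key]
    if key in _events:                      # follower: wait, no sleep loop
        await _events[key].wait()
        return _cache[key]
    event = _events[key] = asyncio.Event()  # leader: do the upstream work
    try:
        _cache[key] = await compute()
        return _cache[key]
    finally:
        event.set()                         # wake all followers immediately
        del _events[key]
```

This only deduplicates within one process; multi-worker deployments would still need the Redis lock/poll path (or pub/sub) as a fallback.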
**#19 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:18:42Z
> **Hypothesis:** The `get_job_status` route constructs a `MapsService` instance (which also constructs an `ApifyClient` instance) on every request via `Depends(get_maps_service)`, but then *also* separately injects `cache` via `Depends(get_cache)` — resulting in two dependency resolution chains both calling `get_cache`. The redundant double-call to `get_cache` is minor, but more importantly the `cache.ttl()` call is always made as a **separate sequential Redis round-trip after** the `get_run_statu […]

**#18 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:18:08Z
> **Hypothesis:** The `RedisCache.get()` and `RedisCache.set()` methods serialize/deserialize JSON on every cache operation using `json.loads`/`json.dumps` with a full Python call stack, but the `json.dumps` in `set()` uses `default=str` which forces a slower path for every value. More importantly, `hash_payload` recomputes `json.dumps(payload, sort_keys=True)` + SHA-256 on every single `search_with_coalescing` call even though the same payload object was just dumped via `model_dump()` — this means […]

**#17 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:17:31Z
> **Hypothesis:** The `RedisCache` connection is created with default `aioredis.from_url()` parameters that don't configure connection pool size, meaning under concurrent load the pool may bottleneck. More critically, the `health_check` endpoint imports `ApifyClient` and `CircuitBreaker` **inside the function body** on every call, incurring repeated module-attribute lookup overhead — but the more impactful issue is that `get_run_status` in the route layer calls `cache.ttl()` as a **separate sequent […]

**#16 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:17:04Z
> **Hypothesis:** The `_wait_for_coalesced_result` method uses a fixed 0.5-second polling interval, meaning coalesced requests always wait at least 500ms before seeing a result that may already be available in Redis. Replacing the polling loop with asyncio-native signaling using an in-process `asyncio.Event` (stored in `_coalesce_events` on the service, which already exists but is unused) will reduce coalesced-request latency from O(poll_interval) to near-zero, directly cutting p95 latency for dupl […]

**#15 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:16:29Z
> **Hypothesis:** The `get_job_status` route performs two sequential Redis round-trips — one in `service.get_run_status()` and a separate `cache.ttl()` call — when a single Redis `GET` + `TTL` pipeline could retrieve both atomically. By pipelining these two commands, we eliminate one full network round-trip on every cached status hit, which dominates p95 latency for this high-frequency polling endpoint. **Expected Impact:** For cached responses (the common case after the first fetch), the route cur […]

**#14 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:16:04Z
> **Hypothesis:** The `MapsService` class creates a new `ApifyClient` instance on every request (since `MapsService` itself is instantiated per-request via `Depends(get_maps_service)`), and each `ApifyClient.__init__` sets `self._http = None`, meaning the `httpx.AsyncClient` is also recreated on every request — discarding the connection pool and forcing a fresh TCP/TLS handshake to Apify on every upstream call. By making `ApifyClient` a singleton (or storing it on `app.state` at startup alongside t […]

**#13 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:15:46Z
> **Hypothesis:** The `search_with_coalescing` method calls `request.model_dump(exclude_none=True)` twice — once to compute the payload hash and build cache keys, and again inside `start_job()` which calls `request.model_dump(exclude_none=True)` a second time — wasting CPU on redundant Pydantic serialization. By computing the payload dict once and passing it through, we eliminate the duplicate serialization overhead on every cache-miss path. **Expected Impact:** `model_dump` on a Pydantic v2 model […]

**#12 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:15:07Z
> **Hypothesis:** The `httpx.AsyncClient` inside `ApifyClient` is recreated on every request because `ApifyClient` itself is instantiated fresh per-request (in `get_maps_service` dependency), meaning each request pays TCP connection establishment and TLS handshake overhead against the Apify API. Lifting a single shared `httpx.AsyncClient` with a connection pool to application lifespan (alongside the Redis cache) will eliminate this per-request connection setup cost. **Expected Impact:** - **p95 lat […]

**#11 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:14:31Z
> **Hypothesis:** The `ApifyClient` is instantiated fresh on every request inside `MapsService.__init__`, which is itself constructed per-request via FastAPI's `Depends(get_maps_service)`. This means every request creates a new `httpx.AsyncClient` object (and potentially a new TCP connection to Apify) rather than reusing a persistent connection pool. By making `ApifyClient` a module-level singleton with a single shared `httpx.AsyncClient`, we eliminate repeated connection setup overhead and enable […]

**#10 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:13:45Z
> **Hypothesis:** The `invalidate_run_cache` route calls `cache.delete()` twice sequentially with two separate Redis round-trips. Replacing these with a single `UNLINK` (or `DELETE` with multiple keys) pipeline call reduces the number of Redis round-trips from 2 to 1, cutting latency for cache invalidation requests. More broadly, the `get_job_status` route also makes two sequential Redis calls (`cache.get` inside `service.get_run_status` then `cache.ttl` in the route handler) that could be pipeline […]

**#9 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:13:17Z
> **Hypothesis:** The `get_job_results` route calls `service.get_run_results()` which always performs a sequential `get_run_status()` fetch (adding a Redis round-trip or full Apify HTTP call) before fetching dataset items, even when the results cache key itself is a miss but the status is already cached. More critically, the `get_job_status` route makes **two separate sequential Redis calls** — one `GET` in `get_run_status()` and then a second `TTL` call in the route handler — that can be collapsed […]

**#8 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:12:48Z
> **Hypothesis:** The `get_job_status` route makes two sequential Redis round-trips — one inside `service.get_run_status()` and a second `cache.ttl()` call — where both could be satisfied in a single Redis pipeline, saving one full network RTT on every cached status response. **Expected impact:** For the common hot path (cached status lookup), p95 latency drops by one Redis round-trip (~0.5–2ms per trip locally, higher in cloud deployments). The fix is to use a Redis pipeline to atomically issue ` […]

**#7 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:12:16Z
> **Hypothesis:** The `RedisCache` methods `get`, `set`, `delete`, and `ttl` each make individual round-trip calls to Redis. In the hot path for `get_job_results`, there are **three sequential Redis operations** (`get` cache, `get` run_status cache, `set` result cache) plus additional calls in `get_run_status` — these serial round-trips add up at p95. By pipelining the two `delete` calls in `invalidate_run_cache` and, more impactfully, combining the `acquire_coalesce_lock` + `get search_cache_key` […]

**#6 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:11:53Z
> **Hypothesis:** The `ApifyClient` is instantiated fresh on every request (inside `MapsService.__init__`, which is called per-request via `Depends(get_maps_service)`), creating a new `httpx.AsyncClient` on each call. This defeats HTTP/1.1 keep-alive connection reuse and TCP connection pooling, adding TCP handshake latency to every upstream call. **Fix:** Instantiate a single shared `ApifyClient` (and its underlying `httpx.AsyncClient`) at application startup and store it on `app.state`, then inje […]

**#5 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:11:14Z
> **Hypothesis:** The `_wait_for_coalesced_result` polling loop uses a fixed 0.5-second sleep interval, meaning coalesced waiters incur up to 500ms of unnecessary latency even when the primary request completes in milliseconds. Replacing the polling approach with an `asyncio.Event` for in-process coordination (falling back to short-interval Redis polling only when needed) will dramatically reduce the wait time for coalesced requests. **Expected Impact:** When multiple identical requests arrive simu […]

**#4 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:10:30Z
> **Hypothesis:** The `_wait_for_coalesced_result` polling loop uses a fixed 0.5-second sleep interval, meaning coalesced waiters incur up to 500ms of unnecessary latency per poll cycle. Replacing the Redis polling loop with an asyncio `Event`-based notification (where the leader signals waiters in-process via `self._coalesce_events`) will reduce the wait latency for coalesced requests from O(poll_interval) to near-zero. **Expected Impact:** Under concurrent identical requests, all non-leader corou […]

**#3 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:10:05Z
> **Hypothesis:** The `ApifyClient` is instantiated fresh on every request (inside `MapsService.__init__`, which itself is created per-request via `Depends(get_maps_service)`), meaning a new `httpx.AsyncClient` is created (and potentially torn down) for every request. By promoting the `ApifyClient` to application-level singleton state (stored on `app.state` during lifespan startup, alongside the cache), the same `httpx.AsyncClient` with its underlying connection pool will be reused across all reque […]

**#2 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:09:35Z
> **Hypothesis:** The `ApifyClient` is instantiated fresh on every request (inside `MapsService.__init__`, which is called per-request via FastAPI's `Depends`), meaning the underlying `httpx.AsyncClient` and its connection pool are torn down and recreated on each request, eliminating HTTP connection reuse and adding TCP/TLS handshake overhead on every upstream call. By making `ApifyClient` a module-level singleton (or attaching it to `app.state` at startup alongside the cache), the `httpx.AsyncCli […]

**#1 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:08:53Z
> **Hypothesis:** The `ApifyClient` lazily creates a new `httpx.AsyncClient` instance per request (when `self._http is None or self._http.is_closed`) but each `MapsService` instantiation creates a fresh `ApifyClient`, meaning every request gets a new HTTP client with no connection pool reuse — paying TCP+TLS handshake overhead on every upstream call. Making `ApifyClient` a module-level singleton with a persistent, pre-warmed `httpx.AsyncClient` will allow connection reuse across requests, eliminat […]

**#0 · Google Maps Scraper** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-14T13:08:19Z
> **Hypothesis:** The `ApifyClient` lazily creates a new `httpx.AsyncClient` instance per-request check (every time `.http` property is accessed, it checks if `_http is None or self._http.is_closed`), but more critically, a **new `ApifyClient` (and thus a new `httpx.AsyncClient`) is instantiated on every request** via `MapsService.__init__` — which is itself instantiated on every request via FastAPI's `Depends(get_maps_service)`. This means no HTTP connection pooling is happening across requests, f […]
**#49 · Fast Instagram Profile Stats Checker** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-12T13:09:51Z
> **Hypothesis:** The `CircuitBreaker.call()` method acquires `self._lock` to check/transition state, then releases it before calling the upstream function, then re-acquires it in `_on_success`/`_on_failure`. For the common **CLOSED** state (no failures), this lock acquisition is pure overhead on every single request — but the real problem is that in `_on_success` and `_on_failure`, the lock is acquired again, serializing all concurrent requests through two lock acquisitions per call. We can replac […]

**#48 · Fast Instagram Profile Stats Checker** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-12T13:09:35Z
> **Hypothesis:** The `ApifyService` instance is recreated on every request via `Depends(_get_apify_service)`, which also means the `httpx.AsyncClient` is never truly reused across requests (it's created lazily per-service instance and discarded). By storing a single shared `ApifyService` instance on `app.state` at startup (like the cache and circuit breaker), we eliminate per-request object allocation and ensure the `httpx.AsyncClient` connection pool is genuinely reused across all requests, reduc […]

**#47 · Fast Instagram Profile Stats Checker** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-12T13:08:43Z
> **Hypothesis:** The `wait_for_cache` polling loop uses a fixed `asyncio.sleep(0.5)` interval, meaning coalesced requests wait up to 500ms longer than necessary after the in-flight result is written. Replacing the polling loop with a Redis pub/sub or `WAIT`-based notification — or at minimum using an in-process `asyncio.Event` for requests coalesced within the same process — would allow waiting requests to be unblocked immediately when the result is available, reducing p95 latency for cache-coales […]

**#46 · Fast Instagram Profile Stats Checker** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-12T13:08:25Z
> **Hypothesis:** The `CircuitBreaker._lock` is held during the entire upstream `func(*args, **kwargs)` call (because `_on_success` and `_on_failure` each re-acquire it, but the real issue is that the lock check at the top of `call()` serializes all concurrent requests through a single `asyncio.Lock` even in the common CLOSED state). However, looking more carefully, the actual bottleneck is that **`ApifyService` is instantiated fresh on every request** via `Depends(_get_apify_service)`, which means […]

**#45 · Fast Instagram Profile Stats Checker** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-12T13:08:05Z
> **Hypothesis:** The `ApifyService` is instantiated on every request via `Depends(_get_apify_service)`, which creates a new object (and thus a new `httpx.AsyncClient`) for every request. By creating a single shared `ApifyService` instance at application startup (stored on `app.state`) and reusing it across all requests, we eliminate repeated client construction overhead and — more importantly — enable true connection pooling across requests, reducing TCP handshake and TLS negotiation costs on the […]

**#44 · Fast Instagram Profile Stats Checker** · patch: invalid · p95 before/after/Δ: n/a · result: skip · 2026-04-12T13:07:51Z
> **Hypothesis:** The `_make_cache_id` function rebuilds a sorted list, serializes it with `json.dumps`, and computes a `sha256` hash on every request — but for the common single-URL case (and small URL sets), a much faster path exists: skip `json.dumps` entirely and hash the pre-joined sorted URL string directly, saving the JSON serialization overhead and reducing object allocations. **Expected impact:** For cache-hit paths (the majority of p95 traffic), `_make_cache_id` is called on every reques […]
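Hashing the joined, sorted URLs directly, as this entry suggests, is essentially a one-liner; `make_cache_id` here is a sketch, not the project's function:

```python
import hashlib


def make_cache_id(urls):
    # Hash the sorted, newline-joined URLs directly; skips json.dumps entirely.
    # Sorting keeps the id order-independent, like the json.dumps(sort_keys=True)
    # approach it replaces.
    joined = "\n".join(sorted(urls))
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()
```

The newline separator assumes URLs never contain a literal newline, which holds for valid URLs; any other character guaranteed absent from the inputs would work as well.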
Fast Instagram Profile Stats Checker · #43 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:07:11.226316+00:00
  Hypothesis: The `wait_for_cache` polling loop in `RedisCache` uses a fixed 0.5-second sleep interval, meaning coalesced requests always wait at least 500ms before getting a result even if the upstream call completes in milliseconds. Replacing the fixed polling with Redis `SUBSCRIBE`/`PUBLISH` (pub/sub) so the primary request publishes a notification when it populates the cache, and waiting requests wake up immediately rather than sleeping up to 500ms per cycle, will significantly reduce p95 l…
Fast Instagram Profile Stats Checker · #42 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:06:07.806444+00:00
  Hypothesis: The `CircuitBreaker.call()` method acquires `self._lock` at the start to check/transition state, but then releases it before calling `func`. Meanwhile, `_on_success` and `_on_failure` each re-acquire `self._lock` with `async with self._lock`. This means every request serializes through the lock **twice** (once for state check, once for success/failure accounting). We can eliminate lock contention on the hot path by using a plain `asyncio.Lock`-free approach with an atomic state re…
Fast Instagram Profile Stats Checker · #41 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:05:42.991103+00:00
  Hypothesis: The `ApifyService._get_client()` method creates a new `httpx.AsyncClient` on every request because the service itself is instantiated per-request via `Depends(_get_apify_service)`, meaning the client is never actually reused and TCP connection pooling provides no benefit. By storing a single shared `httpx.AsyncClient` as application state (initialized at startup), all requests will reuse pooled TCP/TLS connections to the Apify API, eliminating repeated connection establishment ove…
Fast Instagram Profile Stats Checker · #40 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:05:21.859431+00:00
  Hypothesis: The `CircuitBreaker.call()` method acquires `self._lock` during the state-check phase but then releases it before calling `func`, meaning every concurrent request must serialize through this lock even during normal CLOSED state operation. Replacing the lock-based state check with a lockless read using `asyncio.Lock` only for state *transitions* (writes) will eliminate unnecessary serialization of concurrent requests through the circuit breaker. Expected Impact: In normal opera…
Fast Instagram Profile Stats Checker · #39 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:04:56.762161+00:00
  Hypothesis: The `_make_cache_id` function calls `json.dumps` and `hashlib.sha256` on every request, but the real bottleneck is that `RedisCache.get` performs a single synchronous Redis round-trip on every cache check — and for the coalescing path, `wait_for_cache` burns 0.5s sleep intervals. However, the most impactful quick win is that **the `CircuitBreaker.call()` acquires `self._lock` for the state-check phase, then releases it before calling `func`, meaning all concurrent requests that pa…
Fast Instagram Profile Stats Checker · #38 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:04:22.997269+00:00
  Hypothesis: The `httpx.AsyncClient` is created per-request (since `ApifyService` is instantiated fresh on every request via `Depends(_get_apify_service)`), meaning every upstream call pays TCP connection establishment and TLS handshake overhead. Sharing a single `httpx.AsyncClient` with a connection pool at the application level — initialized once at startup and injected into `ApifyService` — will eliminate repeated connection setup costs and reduce p95 latency for cache-miss requests. Ex…
Fast Instagram Profile Stats Checker · #37 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:04:02.148384+00:00
  Hypothesis: The `ApifyService` is instantiated on every request via `Depends(_get_apify_service)`, which creates a new `httpx.AsyncClient` per-request (since `_http_client` starts as `None` on each new instance). This means every request pays TCP connection establishment and TLS handshake overhead to Apify, adding significant latency to the p95 path. **Expected Impact:** By storing a single `ApifyService` instance on `app.state` at startup (with a persistent `httpx.AsyncClient` using a conne…
Fast Instagram Profile Stats Checker · #36 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:03:41.500885+00:00
  Hypothesis: The `CircuitBreaker` uses a single `asyncio.Lock()` that serializes all concurrent requests during the state-check phase of `call()`, even when the circuit is `CLOSED` and no state transition is needed. Replacing the lock-based state check with a lockless read (using only the lock for writes/transitions) will eliminate contention under high concurrency, reducing p95 latency for the common CLOSED-circuit path. **Expected impact:** Under concurrent load, every request currently wai…
Fast Instagram Profile Stats Checker · #35 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:03:28.193708+00:00
  Hypothesis: The `RedisCache.wait_for_cache()` polling loop uses a fixed 0.5-second sleep interval, which means coalesced requests waiting for an in-flight upstream call (which can take up to 120 seconds) will have up to 500ms of unnecessary extra latency beyond when the result actually becomes available. Replacing the fixed 0.5s sleep with an adaptive exponential backoff starting at 50ms (capped at 500ms) will reduce the average and p95 wait time for coalesced requests by hundreds of millisec…
Fast Instagram Profile Stats Checker · #34 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:03:09.994945+00:00
  Hypothesis: The `ApifyService` is instantiated on every request via `Depends(_get_apify_service)`, which creates a new object (and potentially a new `httpx.AsyncClient`) on each call. By storing a single shared `ApifyService` instance on `app.state` at startup (alongside the cache and circuit breaker), we eliminate per-request object construction overhead and guarantee the `httpx.AsyncClient` connection pool is truly reused across all requests. Expected Impact: The `httpx.AsyncClient` ma…
Fast Instagram Profile Stats Checker · #33 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:02:31.331826+00:00
  Hypothesis: The `CircuitBreaker.call()` method holds `self._lock` only during the state-check phase, but `_on_success()` and `_on_failure()` each re-acquire the same lock — causing two additional lock round-trips per request. Since the circuit breaker is almost always in `CLOSED` state, we can eliminate the locking overhead entirely for the hot path by using a lock-free fast path with atomic reads, reserving locking only for state transitions. Specifically: in `CLOSED` state the current code…
Fast Instagram Profile Stats Checker · #32 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:02:02.333340+00:00
  Hypothesis: The `ApifyService` creates a new `httpx.AsyncClient` instance per-request (via `Depends(_get_apify_service)` which calls `ApifyService(cache=...)` fresh each time), meaning TCP connections to Apify are never reused across requests. By storing a single shared `httpx.AsyncClient` on `app.state` (created at startup and reused), we eliminate TCP handshake and TLS negotiation overhead on every upstream call, which dominates p95 latency for cache-miss paths. Expected Impact: HTTP co…
Fast Instagram Profile Stats Checker · #31 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:01:49.701206+00:00
  Hypothesis: The `wait_for_cache` polling loop in `RedisCache.wait_for_cache()` uses a fixed 500ms sleep interval, causing coalesced requests to wait up to 500ms longer than necessary after the primary request completes. Replacing the fixed polling with an adaptive backoff that starts at 50ms and caps at 500ms will reduce the median wait time for coalesced requests by ~200-400ms, directly cutting p95 latency when concurrent duplicate requests are common. Expected Impact: When multiple requ…
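The 50ms-to-500ms backoff schedule that these entries keep proposing can be sketched as a small generator; the constants are the log's proposed values, not measured ones, and the generator would replace the fixed `asyncio.sleep(0.5)` inside the polling loop.

```python
import itertools
from typing import Iterator

def backoff_intervals(initial: float = 0.05, cap: float = 0.5,
                      factor: float = 2.0) -> Iterator[float]:
    # Yield sleep intervals for the coalescing wait loop: start at 50 ms,
    # double each iteration, and clamp at the 500 ms cap.
    delay = initial
    while True:
        yield delay
        delay = min(delay * factor, cap)

# The wait loop would then do `await asyncio.sleep(d)` per yielded d,
# so a result written after ~100 ms is observed within ~150 ms instead
# of after a full 500 ms sleep.
schedule = [round(d, 2) for d in itertools.islice(backoff_intervals(), 6)]
```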
Fast Instagram Profile Stats Checker · #30 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:01:29.211520+00:00
  Hypothesis: The `ApifyService` is instantiated on every request via `Depends(_get_apify_service)`, which creates a new `httpx.AsyncClient` lazily on first use but then discards it at request end — meaning every request that misses the cache pays TCP connection establishment overhead to Apify. By storing a single shared `ApifyService` instance (with a persistent `httpx.AsyncClient`) on `app.state` at startup, we eliminate per-request connection setup and amortize TLS handshake costs across all…
Fast Instagram Profile Stats Checker · #29 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:01:04.080636+00:00
  Hypothesis: The `CircuitBreaker.call()` method acquires `self._lock` as an `async with` block just to read the circuit state, serializing all concurrent requests through a single lock even when the circuit is CLOSED (the common fast path). Replacing the state-check lock acquisition with a lockless read of `self._state` (since Python GIL makes simple attribute reads atomic) and only using the lock for state *transitions* will eliminate this contention bottleneck on the hot path. **Expected im…
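The lock-free CLOSED fast path proposed across these circuit-breaker entries can be sketched as below. This is an illustration under stated assumptions, not the service's actual implementation: class and attribute names are invented, the state read is a plain attribute access (safe for code running on a single event loop), and the `asyncio.Lock` guards only mutations.

```python
import asyncio

class CircuitBreaker:
    # Minimal two-state sketch (the real breaker also has HALF_OPEN).
    CLOSED, OPEN = "closed", "open"

    def __init__(self, failure_threshold: int = 3) -> None:
        self.state = self.CLOSED
        self.failures = 0
        self.failure_threshold = failure_threshold
        self._lock = asyncio.Lock()  # taken only for state transitions

    async def call(self, func, *args, **kwargs):
        if self.state == self.OPEN:          # lock-free hot-path read
            raise RuntimeError("circuit open")
        try:
            result = await func(*args, **kwargs)
        except Exception:
            async with self._lock:           # slow path: mutate under lock
                self.failures += 1
                if self.failures >= self.failure_threshold:
                    self.state = self.OPEN
            raise
        if self.failures:                    # success locks only to reset
            async with self._lock:
                self.failures = 0
        return result

async def demo() -> tuple[int, bool, bool]:
    async def ok():
        return 42
    async def boom():
        raise ValueError("upstream error")

    cb = CircuitBreaker(failure_threshold=2)
    first = await cb.call(ok)
    for _ in range(2):
        try:
            await cb.call(boom)
        except ValueError:
            pass
    tripped = cb.state == cb.OPEN
    try:
        await cb.call(ok)
        rejected = False
    except RuntimeError:
        rejected = True
    return first, tripped, rejected
```

The trade-off: a request may read a stale CLOSED state during a concurrent transition and let one extra call through, which is acceptable for a breaker but should be stated in the patch.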
Fast Instagram Profile Stats Checker · #28 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:00:44.190127+00:00
  Hypothesis: The `RedisCache` connection is created with default connection pool settings, which means each Redis operation may incur connection establishment overhead under concurrent load. Explicitly configuring the connection pool with a larger `max_connections` limit and enabling `socket_keepalive` will reduce per-operation latency by ensuring connections are reused from a pre-warmed pool rather than being established on-demand. Expected Impact: Under concurrent requests, Redis operati…
Fast Instagram Profile Stats Checker · #27 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:00:25.351311+00:00
  Hypothesis: The `_make_cache_id` function calls `json.dumps` + `hashlib.sha256` on every request, but more critically, the `ApifyService` is instantiated via `Depends(_get_apify_service)` on every request, creating a new object (and resetting `_http_client` to `None`) each time — meaning `_get_client()` always creates a new `httpx.AsyncClient` instead of reusing a persistent connection pool. Storing a single shared `ApifyService` instance on `app.state` (initialized once at startup with a per…
Fast Instagram Profile Stats Checker · #26 · patch invalid · p95 n/a · result: skip · 2026-04-12T13:00:05.393550+00:00
  Hypothesis: The `CircuitBreaker._lock` is an `asyncio.Lock()` created at instantiation time, but the circuit breaker is shared across all requests via `app.state`. The lock is acquired **twice** per successful request (once in `call()` for state check, once in `_on_success()`), serializing all concurrent requests through a single contention point — even under normal CLOSED state operation where no state mutation is needed. The fix is to skip lock acquisition entirely when the circuit is alre…
Fast Instagram Profile Stats Checker · #25 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:59:25.138675+00:00
  Hypothesis: The `httpx.AsyncClient` in `ApifyService` is recreated per-request (since `ApifyService` itself is instantiated per-request via `Depends(_get_apify_service)`), meaning no HTTP connection pooling occurs across requests. By making the `httpx.AsyncClient` a shared, application-level singleton stored in `app.state`, TCP connections to Apify's API can be reused, eliminating TLS handshake and TCP setup overhead on the critical path. Expected Impact: For cache misses that require an…
Fast Instagram Profile Stats Checker · #24 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:59:08.132317+00:00
  Hypothesis: The `ApifyService` is instantiated on every request via the `Depends(_get_apify_service)` dependency, which recreates the object (and loses the cached `_http_client`) on each call. Moving the `ApifyService` instance to `app.state` during startup (alongside the cache and circuit breaker) ensures the `httpx.AsyncClient` connection pool is truly shared and reused across all requests, eliminating repeated client initialization overhead and TCP connection setup costs. **Expected impac…
Fast Instagram Profile Stats Checker · #23 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:58:22.852951+00:00
  Hypothesis: The `wait_for_cache` method in `RedisCache` uses a Python-level polling loop with `asyncio.sleep(0.5)` intervals, but for the coalescing case, a more efficient approach would be to use Redis `SUBSCRIBE`/`PUBLISH` (pub/sub) to get immediate notification when the in-flight result is ready. However, since many prior iterations were reverted, a safer and still impactful change is to reduce the polling interval from 0.5s to 0.1s and add an early-exit Redis pipeline that combines the lo…
Fast Instagram Profile Stats Checker · #22 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:58:02.553887+00:00
  Hypothesis: The `CircuitBreaker.call()` method acquires `self._lock` as an `async with` block during the state-check phase, but this lock is a **coroutine-level asyncio.Lock** that serializes all concurrent requests through a single chokepoint — even when the circuit is CLOSED and no state mutation is needed. Replacing the initial read-only state check with a lockless fast-path (checking `self._state` without acquiring the lock, only locking for mutations in `_on_success`/`_on_failure`) will…
Fast Instagram Profile Stats Checker · #21 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:57:37.930177+00:00
  Hypothesis: The `ApifyService._get_client()` method checks `self._http_client is None or self._http_client.is_closed` on every request, but since `ApifyService` is instantiated fresh per-request (via `_get_apify_service` dependency), the client is **always** `None` and a new `httpx.AsyncClient` is created every time — incurring TCP connection establishment overhead on every upstream call. Storing the `httpx.AsyncClient` as application state (initialized once at startup) and injecting it into…
Fast Instagram Profile Stats Checker · #20 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:57:23.573583+00:00
  Hypothesis: The `RedisCache.connect()` method creates the Redis client with default connection pool settings, meaning every concurrent request may contend on pool exhaustion or connection creation overhead. By pre-configuring the connection pool with an explicit `max_connections` limit and enabling `health_check_interval`, we reduce the latency spike at p95 caused by pool starvation under concurrent load. Expected Impact: Under concurrent requests, the default pool (10 connections) can b…
Fast Instagram Profile Stats Checker · #19 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:57:03.700847+00:00
  Hypothesis: The `CircuitBreaker.call()` method acquires `self._lock` twice per successful request (once at the start to check state, and once inside `_on_success`), and each acquisition requires an async context switch. Since the CLOSED state check is a pure in-memory read that doesn't need locking for correctness under normal (non-transitioning) operation, we can replace the lock-guarded state check with a lockless fast-path that only falls back to the lock when the state is not CLOSED. …
Fast Instagram Profile Stats Checker · #18 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:56:39.935093+00:00
  Hypothesis: The `uuid.uuid4()` call in the route handler, combined with multiple `str()` conversions and `logger.info()` format string evaluations on every request, adds measurable overhead. However, the more impactful issue is that `_make_cache_id` performs `json.dumps` + SHA-256 hashing on every request — for the common single-URL case, this can be replaced with a direct string operation that avoids both the JSON serialization and the full cryptographic hash, reducing CPU overhead on the ho…
Fast Instagram Profile Stats Checker · #17 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:56:22.811923+00:00
  Hypothesis: The `SuccessResponse` model uses `List[InstagramProfileStats]` with `model_config = {"extra": "allow"}`, causing Pydantic to perform full field-by-field validation and model instantiation for every profile in the response. By bypassing the intermediate `InstagramProfileStats(**p)` construction in the route handler and instead passing raw dicts directly, then using `model_construct` (which skips validation) for the response assembly, we can eliminate redundant deserialization/re-se…
Fast Instagram Profile Stats Checker · #16 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:55:49.848939+00:00
  Hypothesis: The `CircuitBreaker.call()` method holds `self._lock` only during the state-check phase, but then calls `await func(*args, **kwargs)` (the full Apify upstream call, potentially 120s) **outside** the lock — however, `_on_success` and `_on_failure` each re-acquire the same lock. The real bottleneck is that for **cache-hit requests**, the flow still goes through `cb.call()` → `service.get_profile_stats()` which acquires the circuit breaker lock twice (success/failure path) adding unn…
Fast Instagram Profile Stats Checker · #15 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:54:59.451624+00:00
  Hypothesis: The `_make_cache_id` function calls `json.dumps` followed by `hashlib.sha256` on every request, but more critically, the Redis `get` operation in `get_profile_stats` and the subsequent `set_lock` operation are two separate round-trips to Redis on every cache miss — these can be collapsed by using a Redis pipeline/transaction, but a simpler and higher-impact fix is to add an **in-process LRU cache** (using `functools.lru_cache` or a small `dict` with TTL) in front of the Redis cach…
Fast Instagram Profile Stats Checker · #14 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:54:31.554286+00:00
  Hypothesis: The `httpx.AsyncClient` is recreated on every request because `ApifyService` is instantiated fresh per request via the `Depends(_get_apify_service)` factory, meaning each upstream call pays TCP connection establishment and TLS handshake overhead. Sharing a single `httpx.AsyncClient` (with a connection pool) at the application level will eliminate repeated connection setup costs and reduce p95 latency for cache-miss requests. Explanation: Every request that misses the cache cal…
Fast Instagram Profile Stats Checker · #13 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:53:58.340428+00:00
  Hypothesis: The `_get_apify_service` dependency creates a new `ApifyService` instance on every request, which means a new `httpx.AsyncClient` is created lazily on the first HTTP call within each request (and never connection-pooled across requests). Storing a single shared `ApifyService` instance in `app.state` and reusing it across all requests will enable persistent HTTP connection pooling to the Apify API, eliminating TCP handshake and TLS negotiation overhead on every upstream call. …
Fast Instagram Profile Stats Checker · #12 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:53:42.200045+00:00
  Hypothesis: The `wait_for_cache` polling loop uses `asyncio.sleep(0.5)` with a fixed 50-iteration budget, meaning coalesced requests always wait at least 500ms before getting their result even if the in-flight request completes in 100ms. Switching to exponential backoff starting at 50ms (capped at 500ms) will dramatically reduce the median and p95 latency for coalesced requests. Expected Impact: For coalesced requests (cache miss with a concurrent in-flight call), the current implementati…
Fast Instagram Profile Stats Checker · #11 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:53:09.944708+00:00
  Hypothesis: The `ApifyService` is instantiated fresh on every request via the `_get_apify_service` dependency, creating a new `httpx.AsyncClient` on each call (since `self._http_client` is always `None` on a new instance). Storing a single shared `ApifyService` instance on `app.state` at startup — like the cache and circuit breaker — eliminates repeated client construction overhead and enables genuine connection reuse across requests, reducing p95 latency for both cache-hit and cache-miss pat…
Fast Instagram Profile Stats Checker · #10 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:52:45.863549+00:00
  Hypothesis: The `CircuitBreaker.call()` method acquires `self._lock` for a state-check at the start, then releases it, then calls `func`, then re-acquires the lock in `_on_success`/`_on_failure` — but the lock is `asyncio.Lock` which serializes all concurrent requests through the circuit breaker sequentially during the state-check phase. Since the circuit is almost always CLOSED, we can replace the lock-based state check with a **lock-free read** (using a plain attribute read, which is safe i…
Fast Instagram Profile Stats Checker · #9 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:52:29.356275+00:00
  Hypothesis: The `ApifyService` is instantiated fresh on every request via the `_get_apify_service` dependency function, which means the `httpx.AsyncClient` is also recreated each time (since `_http_client` starts as `None` on every new `ApifyService` instance). This causes TCP connection setup overhead on every request; sharing a single `ApifyService` instance (with its persistent `httpx.AsyncClient` connection pool) at the application level will eliminate redundant connection establishment a…
Fast Instagram Profile Stats Checker · #8 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:52:00.409211+00:00
  Hypothesis: The `CircuitBreaker` uses a single `asyncio.Lock` that serializes all concurrent requests through a bottleneck: every request acquires `_lock` twice (once for state-check, once for `_on_success`/`_on_failure`), meaning under load, requests queue up waiting for the lock even when the circuit is CLOSED and no state change is needed. Replacing the coarse lock with an `asyncio.Lock` only on actual state transitions (using a simple read-check without locking for the hot CLOSED path) wi…
Fast Instagram Profile Stats Checker · #7 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:51:46.458908+00:00
  Hypothesis: The `wait_for_cache` polling loop uses a fixed 500ms sleep interval, meaning coalesced requests that finish quickly (e.g., in 1-2 seconds) still wait up to 500ms after the result is available. Switching to exponential backoff starting at 50ms (capped at 500ms) will reduce the average and p95 wait time for coalesced requests significantly. Expected Impact: For coalesced requests where the in-flight Apify call completes in ~2-5 seconds, the current implementation adds up to 500m…
Fast Instagram Profile Stats Checker · #6 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:51:29.320252+00:00
  Hypothesis: The `ApifyService` is instantiated fresh on every request (via `Depends(_get_apify_service)`), which means `_http_client` starts as `None` each time and a new `httpx.AsyncClient` is created per request — losing connection pooling benefits. Moving the `ApifyService` (with its shared `httpx.AsyncClient`) to app state at startup will reuse TCP connections across requests, eliminating TLS handshake and connection setup overhead on every call. **Expected impact:** For cache-miss reque…
Fast Instagram Profile Stats Checker · #5 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:50:40.606248+00:00
  Hypothesis: The `CircuitBreaker.call()` method holds `self._lock` only during state-checking, but `_on_success()` and `_on_failure()` each re-acquire the same lock — this is correct, but the bigger issue is that the circuit breaker wraps the entire `service.get_profile_stats` call (including the long Apify HTTP call) while the lock is **not** held during execution. However, the real latency problem is that `ApifyService` creates a **new `httpx.AsyncClient` on every request** because the servi…
Fast Instagram Profile Stats Checker · #4 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:50:26.608625+00:00
  Hypothesis: The `wait_for_cache` polling loop in `cache.py` uses a fixed 500ms sleep interval, meaning coalesced requests wait an average of 250ms extra after the primary request completes before receiving the cached result. Reducing the polling interval to 50ms will dramatically cut the tail latency for coalesced (cache-miss) requests without meaningful overhead. Expected Impact: For any request that hits the coalescing path (lock not acquired), the current implementation polls every 50…
Fast Instagram Profile Stats Checker · #3 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:49:52.752625+00:00
  Hypothesis: The `ApifyService` instantiates a new `httpx.AsyncClient` on every request (via `_get_client()` creating a fresh client when closed/None), losing connection pooling and incurring TCP handshake + TLS overhead on each upstream call. Moving to a single shared `httpx.AsyncClient` created at application startup and injected via `app.state` will reuse persistent connections to Apify's servers, eliminating repeated TCP+TLS setup costs that dominate p95 latency for cache-miss requests. …
Fast Instagram Profile Stats Checker · #2 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:49:25.530400+00:00
  Hypothesis: The `CircuitBreaker.call()` method acquires `self._lock` for the state-check at the top, then releases it, then acquires it **again** inside `_on_success()` and `_on_failure()` — but more critically, the lock is held during the **entire upstream HTTP call** in the `HALF_OPEN` state check path, and the `asyncio.Lock` is re-entrant-unsafe, meaning concurrent requests all serialize through a single lock even in the normal `CLOSED` path where no mutation is needed. For the common hot…
Fast Instagram Profile Stats Checker · #1 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:48:53.498976+00:00
  Hypothesis: The `ApifyService` creates a new `httpx.AsyncClient` per-request (inside `_get_client()`, which checks `is_closed` but constructs a fresh client on every cold path) rather than sharing a single persistent client across all requests. By instantiating one `httpx.AsyncClient` at application startup (stored on `app.state`) with connection pooling enabled, we eliminate repeated TCP handshake and TLS negotiation overhead on every upstream call, directly reducing p95 latency for cache-mi…
Fast Instagram Profile Stats Checker · #0 · patch invalid · p95 n/a · result: skip · 2026-04-12T12:48:15.059762+00:00
  Hypothesis: The `ApifyService` creates a new `httpx.AsyncClient` per-request (or per service instance that is itself created per-request via `Depends(_get_apify_service)`), meaning TCP connection establishment and TLS handshake overhead is paid on every upstream call. By moving the `httpx.AsyncClient` to application lifespan state (shared, persistent), connection pooling will be used across requests, eliminating repeated TCP/TLS setup and reducing p95 latency by hundreds of milliseconds on ca…
Facebook page posts checker · #49 · patch invalid · p95 n/a · result: skip · 2026-04-09T14:26:12.170259+00:00
  Hypothesis: The `RedisCache.connect()` method creates the Redis client with default connection pool settings, meaning every burst of concurrent requests may queue waiting for a single connection. Increasing the connection pool size (via `max_connections`) will reduce p95 latency under load by allowing multiple Redis operations to execute truly in parallel rather than serializing behind a single pooled connection. Expected Impact: Under concurrent load, multiple coroutines calling `cache.…
Facebook page posts checker · #48 · patch invalid · p95 n/a · result: skip · 2026-04-09T14:25:32.039028+00:00
  Hypothesis: The `ApifyClient` creates a new `httpx.AsyncClient` (with full SSL context initialization, connection pool setup, and event loop registration) on every single API call. Reusing a single persistent `httpx.AsyncClient` instance across all requests eliminates this per-call overhead and enables TCP/TLS connection reuse, which should reduce p95 latency on the upstream calls by avoiding repeated handshake costs. Expected Impact: Each upstream HTTP call currently pays: object allocati…
Facebook page posts checker · #47 · patch invalid · p95 n/a · result: skip · 2026-04-09T14:25:02.009177+00:00
  Hypothesis: The `RedisCache` methods perform separate `json.dumps`/`json.loads` calls using the standard library's `json` module for every cache read and write. Replacing these with `orjson` (which is 2-10x faster for serialization/deserialization) will reduce the CPU overhead on every cache hit path, directly lowering p95 latency for the high-frequency `get_run_status` and `get_dataset_items` endpoints that serve most traffic from cache. Expected Impact: Cache hits dominate steady-state…
Facebook page posts checker · #46 · patch invalid · p95 n/a · result: skip · 2026-04-09T14:24:45.393447+00:00
  Hypothesis: The `get_dataset_items` route reconstructs `PostItem` Pydantic models one-by-one in a list comprehension (`[PostItem(**item) for item in items]`) even when items come from cache, adding unnecessary validation overhead for large datasets. Replacing this with direct dict passthrough and using `model_construct` (which skips validation) will reduce CPU time and p95 latency for dataset-heavy responses. **Expected impact:** For runs returning many posts, the per-item Pydantic construct…
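The `model_construct` path can be sketched as below, assuming pydantic v2 (where `model_construct` replaced v1's `construct`). The `PostItem` fields here are invented stand-ins for the real model; the safety condition is the important part: skipping validation is only defensible for data this service validated before writing it to Redis.

```python
from pydantic import BaseModel

class PostItem(BaseModel):
    # Illustrative fields; the actual PostItem schema is not shown in the log.
    id: str
    text: str
    likes: int = 0

def items_from_cache(raw_items: list[dict]) -> list[PostItem]:
    # Cached items were validated before being serialized, so rebuild them
    # with model_construct(), which skips pydantic's validator chain. Never
    # use this on untrusted input: bad fields pass through silently.
    return [PostItem.model_construct(**item) for item in raw_items]
```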
Facebook page posts checker · #45 · patch invalid · p95 n/a · result: skip · 2026-04-09T14:24:01.297407+00:00
  Hypothesis: **Pre-build and reuse a single `httpx.AsyncClient` with a connection pool at the application level (stored in `app.state`), rather than creating and destroying a new client on every request in `ApifyClient`.** The current code calls `async with httpx.AsyncClient(...) as client:` for every upstream call, which pays TCP connection establishment + TLS handshake costs on each request. By creating one `AsyncClient` with `limits=httpx.Limits(max_keepalive_connections=20, max_connectio…
Facebook page posts checker · #44 · patch invalid · p95 n/a · result: skip · 2026-04-09T14:23:19.843440+00:00
  Hypothesis: **Replace the per-request `httpx.AsyncClient` instantiation in `ApifyClient` with a single shared persistent client created at application startup and injected via `app.state`**, eliminating TCP connection setup, TLS handshake, and object allocation overhead on every upstream call. Every call to `start_run`, `get_run_status`, or `get_dataset_items` currently creates a new `httpx.AsyncClient`, opens a fresh TCP+TLS connection to `api.apify.com`, completes the request, then tears i…
Facebook page posts checker · #43 · patch invalid · p95 n/a · result: skip · 2026-04-09T14:22:44.961616+00:00
  Hypothesis: **Pipeline the Redis `GET` (cache check) and lock-acquisition `SET NX` into a single round-trip using a Lua script or Redis pipeline in `get_dataset_items` and `start_run`**, eliminating one full Redis network round-trip on every cache-miss path. Currently, each cache-miss makes two sequential Redis calls: `GET` (cache check) → `SET NX` (acquire lock). Each call incurs a separate network RTT to Redis (~0.5–2 ms each). By combining them into a single pipelined or atomic Lua operat…
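The GET-plus-lock collapse from hypothesis 43 can be sketched as a Lua script with three outcomes (hit, lock won, wait), dispatched in one round-trip via redis-py's `eval(script, numkeys, *keys_and_args)`. The script text and key names are illustrative; a pure-Python model of the same three-way decision is included so the logic can be exercised without a Redis server.

```python
# One-round-trip cache check + lock acquisition (sketch). Invoke with e.g.
# `await redis.eval(GET_OR_LOCK, 2, cache_key, lock_key, lock_ttl_seconds)`.
GET_OR_LOCK = """
local cached = redis.call('GET', KEYS[1])
if cached then
  return {'hit', cached}
end
-- miss: try to become the single request that calls upstream
if redis.call('SET', KEYS[2], '1', 'NX', 'EX', ARGV[1]) then
  return {'lock', ''}
end
return {'wait', ''}
"""

def get_or_lock_local(store: dict, cache_key: str, lock_key: str) -> tuple[str, str]:
    # Pure-Python model of the script's three outcomes, for illustration:
    # 'hit' returns the cached value, 'lock' means this caller fetches
    # upstream, 'wait' means another caller holds the lock.
    if cache_key in store:
        return ("hit", store[cache_key])
    if lock_key not in store:
        store[lock_key] = "1"
        return ("lock", "")
    return ("wait", "")
```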
Facebook page posts checker42## Hypothesis **Use `ujson` (or the stdlib `json` with `orjson`) for all Redis serialization/deserialization in `cache.py` to reduce the CPU overhead of `json.dumps`/`json.loads` on the hot cache path.** Every cache `get` and `set` call performs a full JSON serialize/deserialize round-trip using Python's stdlib `json` module. Replacing this with `orjson` — which is 2–10× faster for typical dict/list payloads — will reduce the latency added by the cache layer itself, particularly for the `get_dinvalidmsms%skip2026-04-09T14:22:31.230039+00:00
Facebook page posts checker41## Hypothesis **Eliminate the per-request `httpx.AsyncClient` construction overhead by creating a single shared `AsyncClient` at application startup and reusing it across all requests via `app.state`.** Each call to `start_run`, `get_run_status`, or `get_dataset_items` currently creates and tears down a new `httpx.AsyncClient` (including SSL context initialization, connection pool setup, etc.). A persistent shared client with a connection pool allows TCP connections to be reused across requestinvalidmsms%skip2026-04-09T14:21:48.609523+00:00
Facebook page posts checker40## Hypothesis **Create the `httpx.AsyncClient` once per `ApifyClient` instance (stored at application startup) rather than instantiating a new client on every request.** Each `async with httpx.AsyncClient(...)` call currently pays TCP connection setup cost (and TLS handshake overhead) on every upstream call; a persistent client with a connection pool will reuse existing connections, eliminating that latency from the p95 path. ### Expected Impact The dominant latency source for cache-miss pathinvalidmsms%skip2026-04-09T14:21:07.311385+00:00
Facebook page posts checker39## Hypothesis **Pre-compile the Pydantic `RunInfo`, `PostItem`, and response models using `model_construct()` instead of full validation in the route handlers**, bypassing redundant field validation on data that has already been validated/shaped by the service layer. In the route handlers, data returned from `FacebookService` is already a plain dict from a trusted upstream (Apify) that has passed through the service layer — re-running Pydantic's full validator chain (coercion, constraint checkinvalidmsms%skip2026-04-09T14:20:32.723108+00:00
**Facebook page posts checker #38** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:20:09Z
Hypothesis (truncated): **Eliminate the redundant `json.dumps`/`json.loads` round-trip in `RedisCache.set` by using the already-serialized string directly, and skip the intermediate Python object re-serialization in `get` by returning the raw string to callers that immediately re-serialize it — but more impactfully, replace the blocking `json.loads(raw)` in the hot-path `get` method with `orjson.loads` which is 2-3× faster for deserialization.** The Redis `get` call is on the critical path for every cac…

**Facebook page posts checker #37** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:19:53Z
Hypothesis (truncated): The `wait_for_coalesced_result` method in `cache.py` uses a fixed 0.5-second polling interval, meaning coalesced (non-lock-winning) requests for `start_run` and `get_dataset_items` wait at least 500ms before receiving a result. Reducing this poll interval to 50ms for the first few attempts (exponential back-off starting at 50ms) will dramatically cut the p95 latency for requests that hit the coalesce path, since the lock winner typically completes the upstream call within 1-2 seco…
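The exponential back-off described in iteration 37 (and again in iteration 5) can be sketched as a replacement polling loop; the cache callable and the 50 ms/500 ms constants follow the log's numbers, while function names are illustrative:

```python
import asyncio
import time

def backoff_intervals(initial=0.05, cap=0.5, factor=2.0):
    """Yield poll delays: 50 ms, 100 ms, 200 ms, 400 ms, then 500 ms forever."""
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= factor

async def wait_for_coalesced_result(get_cached, timeout=5.0):
    # Hypothetical replacement for a fixed 0.5 s poll loop: poll fast at
    # first (the lock winner usually finishes quickly), back off later.
    deadline = time.monotonic() + timeout
    for delay in backoff_intervals():
        result = await get_cached()
        if result is not None:
            return result
        if time.monotonic() >= deadline:
            return None
        await asyncio.sleep(delay)

# Demo: the lock "winner" publishes the result after ~120 ms.
slot = {}

async def read_slot():
    return slot.get("value")

async def main():
    async def winner():
        await asyncio.sleep(0.12)
        slot["value"] = {"status": "SUCCEEDED"}

    task = asyncio.create_task(winner())
    result = await wait_for_coalesced_result(read_slot)
    await task
    return result

result = asyncio.run(main())
```

Here the waiter observes the result on the poll at ~150 ms instead of waiting out a full 500 ms cycle.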
**Facebook page posts checker #36** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:19:13Z
Hypothesis (truncated): **Persist a single `httpx.AsyncClient` instance at application startup (stored on `app.state`) and reuse it across all requests, rather than creating and destroying a new client per API call.** Every call to `start_run`, `get_run_status`, or `get_dataset_items` currently executes `async with httpx.AsyncClient(...) as client:`, which allocates a new client, establishes a new TCP connection (including TLS handshake to `api.apify.com`), and tears it down — adding 50–200 ms of connec…

**Facebook page posts checker #35** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:18:51Z
Hypothesis (truncated): **Use `response_class=ORJSONResponse` for all route handlers to replace Pydantic's default JSON serialization with the faster `orjson` encoder, eliminating the double-serialization overhead (Pydantic model → dict → `json.dumps`) on every successful response path.** The default `JSONResponse` uses Python's `json.dumps`, and Pydantic's `.model_dump()` + re-serialization adds measurable CPU overhead on every request. `ORJSONResponse` (backed by `orjson`) is 3-10x faster at serializa…

**Facebook page posts checker #34** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:18:32Z
Hypothesis (truncated): **Skip the Redis cache lookup for `start_run` when there is no pre-existing cached result to read — the current code always calls `acquire_coalesce_lock` (a Redis SETNX round-trip) even on the hot path where the job simply needs to be started, adding ~1ms of Redis RTT before every upstream call. Instead, first check if a cached result already exists for the payload hash, and only enter the coalescing lock path on a cache miss.** Expected Impact: The `start_run` path currently…
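The "check the cache before taking the coalesce lock" ordering from iteration 34 (also iterations 21 and 10) can be demonstrated with an in-process stand-in for Redis that counts round-trips; the class and function names are hypothetical:

```python
import asyncio

class InMemoryCache:
    """Tiny in-process stand-in for the Redis cache (GET / SETNX)."""
    def __init__(self, data=None):
        self._data = dict(data or {})
        self.round_trips = 0

    async def get(self, key):
        self.round_trips += 1
        return self._data.get(key)

    async def setnx(self, key, value):
        self.round_trips += 1
        if key in self._data:
            return False
        self._data[key] = value
        return True

async def lock_first(cache, key):
    # Ordering the log criticizes: SETNX (coalesce lock) before any read,
    # so even a cache hit pays the lock round-trip.
    await cache.setnx(key + ":lock", "1")
    return await cache.get(key)

async def cache_first(cache, key):
    # Proposed ordering: read first; a hit returns with one round-trip
    # and never touches the lock at all.
    cached = await cache.get(key)
    if cached is not None:
        return cached
    await cache.setnx(key + ":lock", "1")
    return None

hit_value = {"status": "SUCCEEDED"}
c1 = InMemoryCache({"result:abc": hit_value})
r1 = asyncio.run(lock_first(c1, "result:abc"))
trips_lock_first = c1.round_trips    # 2 round-trips on a cache hit

c2 = InMemoryCache({"result:abc": hit_value})
r2 = asyncio.run(cache_first(c2, "result:abc"))
trips_cache_first = c2.round_trips   # 1 round-trip on a cache hit
```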
**Facebook page posts checker #33** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:17:58Z
Hypothesis (truncated): **Parallelize the two sequential Redis operations in `get_dataset_items` — the dataset cache lookup and the run-status cache lookup — using `asyncio.gather`, and similarly parallelize the cache lookup + circuit-breaker check pattern.** In `get_dataset_items`, the code first does `await self._cache.get(cache_key)`, then later (after the upstream call) does a second `await self._cache.get(self._cache.run_status_key(run_id))` sequentially. More impactfully, we can parallelize the in…
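The `asyncio.gather` idea from iteration 33 (and iterations 9 and 3) is straightforward to show with a fake GET that simulates a network round-trip; key names and the 50 ms RTT are illustrative:

```python
import asyncio
import time

async def slow_get(store, key, rtt=0.05):
    """Fake Redis GET with a simulated network round-trip."""
    await asyncio.sleep(rtt)
    return store.get(key)

async def sequential(store):
    # Two independent lookups awaited one after the other: ~2 RTTs.
    a = await slow_get(store, "dataset:abc")
    b = await slow_get(store, "run_status:abc")
    return a, b

async def concurrent(store):
    # The same two lookups fired together: ~1 RTT of wall-clock time.
    return await asyncio.gather(
        slow_get(store, "dataset:abc"),
        slow_get(store, "run_status:abc"),
    )

store = {"dataset:abc": [1, 2], "run_status:abc": "SUCCEEDED"}

t0 = time.monotonic()
seq = asyncio.run(sequential(store))
seq_time = time.monotonic() - t0

t0 = time.monotonic()
par = asyncio.run(concurrent(store))
par_time = time.monotonic() - t0
```

Note `gather` overlaps the waits but still issues two commands; a redis-py `pipeline()` would additionally collapse them into one network round-trip, which is the variant iteration 22 proposes.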
**Facebook page posts checker #32** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:17:01Z
Hypothesis (truncated): **Reuse a single persistent `httpx.AsyncClient` instance (with a connection pool) at the application level instead of creating and tearing down a new client on every HTTP call in `ApifyClient`.** Each call to `start_run`, `get_run_status`, or `get_dataset_items` currently does `async with httpx.AsyncClient(...) as client:` which allocates a new client, establishes a fresh TCP/TLS connection to `api.apify.com`, and then tears it all down — adding 50–200 ms of TLS handshake overhea…

**Facebook page posts checker #31** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:16:30Z
Hypothesis (truncated): **Use `ujson` for all JSON serialization/deserialization in the Redis cache layer instead of the standard `json` module, and also apply it to the httpx response parsing in the Apify client.** The standard `json` module is significantly slower than `ujson` for both encoding and decoding. Since every cache hit/miss involves `json.loads`/`json.dumps`, and every upstream API response involves JSON parsing, switching to `ujson` (which is a C extension) should reduce CPU time on the ho…

**Facebook page posts checker #30** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:15:41Z
Hypothesis (truncated): **Cache the parsed `Authorization` header token extraction and eliminate the repeated string splitting overhead by inlining the token extraction logic, but more importantly: add an `asyncio.Lock`-based in-process coalescing layer in front of Redis to eliminate the Redis round-trips for the coalesce lock acquire/release on `start_run` and `get_dataset_items` — replacing the two Redis calls (`SETNX` + `DEL`) with a single in-memory check.** The `acquire_coalesce_lock` + `release_co…

**Facebook page posts checker #29** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:15:00Z
Hypothesis (truncated): **Replace per-request `httpx.AsyncClient` instantiation (which incurs TCP connection setup overhead every call) with a single module-level persistent client that uses a `connection_pool` configured with `limits`, initialized once at application startup and injected into `ApifyClient`.** Every call to `start_run`, `get_run_status`, or `get_dataset_items` currently creates a new `httpx.AsyncClient` via `async with httpx.AsyncClient(...) as client`, which tears down and re-establish…

**Facebook page posts checker #28** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:14:31Z
Hypothesis (truncated): **Pre-compute the `Authorization` header string once per request in `ApifyClient` rather than rebuilding it on every call, and eliminate the per-request `httpx.AsyncClient` construction overhead by using a module-level shared client with connection pooling configured at import time.** The current code constructs a new `httpx.AsyncClient` (which allocates a new connection pool, SSL context, etc.) on every single API call inside an `async with` block. This means every upstream requ…

**Facebook page posts checker #27** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:13:53Z
Hypothesis (truncated): **Use a module-level persistent `httpx.AsyncClient` instance with a `connection_pool` (via `limits`) that is created once at startup and reused across all requests, rather than creating and tearing down a new `AsyncClient` on every API call.** Each call to `start_run`, `get_run_status`, and `get_dataset_items` currently does `async with httpx.AsyncClient(...) as client:` which incurs TCP connection establishment overhead (and TLS handshake to `api.apify.com`) on every request. By…

**Facebook page posts checker #26** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:13:13Z
Hypothesis (truncated): **Replace `json.loads` / `json.dumps` in `RedisCache.get` and `RedisCache.set` with `orjson` for faster serialization, and simultaneously eliminate the per-call `httpx.AsyncClient` construction overhead in `ApifyClient` by instantiating a single shared `AsyncClient` at module level (created once, reused across all requests).** The dominant latency contributors on the hot path are: (1) Python's stdlib `json` is significantly slower than `orjson` for both serialization and deserial…

**Facebook page posts checker #25** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:12:29Z
Hypothesis (truncated): **Disable Pydantic's response model serialization overhead by removing `response_model` from route decorators and returning pre-built `JSONResponse` objects directly**, eliminating the double-serialization pass (Pydantic validation + JSON encoding) that occurs on every response. When FastAPI uses `response_model`, it validates and re-serializes the returned object through Pydantic even if you already have a dict/model. By returning `JSONResponse` directly and dropping `response_m…

**Facebook page posts checker #24** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:12:02Z
Hypothesis (truncated): **Replace the polling-based `wait_for_coalesced_result` (which uses `asyncio.sleep(0.5)` intervals adding up to 500ms+ of wasted latency) with a Redis pub/sub notification so waiting coroutines are woken immediately when the winner writes the result.** When multiple concurrent requests coalesce on the same cache key, losers currently poll every 500ms — meaning p95 latency for coalesced requests includes at least one full 500ms sleep cycle. By publishing a notification on the cach…
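Iteration 24's push-instead-of-poll idea can be sketched with an in-process analogue: `asyncio.Event` plays the role of the Redis pub/sub channel (a real version would SUBSCRIBE on a per-key channel and PUBLISH after the winner writes the result). The class and method names are hypothetical:

```python
import asyncio

class CoalescedSlot:
    """In-process analogue of pub/sub wakeup for request coalescing:
    the lock winner stores the result and signals; waiters block on the
    event instead of polling every 500 ms."""
    def __init__(self):
        self._event = asyncio.Event()
        self._result = None

    def publish(self, result):
        self._result = result
        self._event.set()          # wake every waiter immediately

    async def wait(self, timeout=5.0):
        await asyncio.wait_for(self._event.wait(), timeout)
        return self._result

async def main():
    slot = CoalescedSlot()

    async def winner():
        await asyncio.sleep(0.05)              # upstream call finishes fast
        slot.publish({"status": "SUCCEEDED"})

    task = asyncio.create_task(winner())
    # The waiter is woken ~50 ms in, not after a full 500 ms poll cycle.
    result = await slot.wait()
    await task
    return result

result = asyncio.run(main())
```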
**Facebook page posts checker #23** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:11:23Z
Hypothesis (truncated): **Use a module-level persistent `httpx.AsyncClient` with a connection pool (initialized at app startup, shared across all requests) instead of creating and tearing down a new client on every API call.** Every request to `apify_client.py` currently does `async with httpx.AsyncClient(...) as client:` which pays TCP connection establishment + TLS handshake overhead on each call. A persistent client with `limits=httpx.Limits(max_keepalive_connections=10, max_connections=20)` will reu…

**Facebook page posts checker #22** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:10:54Z
Hypothesis (truncated): **Reduce the Redis round-trips in `get_run_status` and `get_dataset_items` by using `asyncio.gather` to fire the circuit-breaker check and cache lookup concurrently — but more impactfully, eliminate the second Redis `GET` call inside `get_dataset_items` (the run-status cache lookup) by folding it into a single pipelined request.** Specifically, the `get_dataset_items` method makes two sequential Redis `GET` calls (one for the dataset cache key, one for the run-status key to decid…

**Facebook page posts checker #21** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:10:37Z
Hypothesis (truncated): **Skip Redis cache lookup for the `start_run` path when no coalesce lock exists, and eliminate the double Redis round-trip (lock acquire + cache get) by checking the cache *before* attempting to acquire the coalesce lock.** Currently in `get_dataset_items` and `start_run`, the code acquires a coalesce lock and *then* waits, but for `get_run_status` the cache is checked first (correct pattern). The `start_run` path does zero cache read before the lock — meaning every request pays…

**Facebook page posts checker #20** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:09:54Z
Hypothesis (truncated): **Maintain a single, long-lived `httpx.AsyncClient` with a connection pool at the application level (stored on `app.state`), shared across all requests, instead of creating and tearing down a new client per API call.** Every request to `ApifyClient` currently executes `async with httpx.AsyncClient(...) as client:`, which creates a new client, establishes a fresh TCP connection (or waits for one from the OS), completes the TLS handshake, and then closes everything — paying this ov…

**Facebook page posts checker #19** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:09:24Z
Hypothesis (truncated): **Use `orjson` for all JSON serialization/deserialization in the Redis cache layer**, replacing the standard `json` module with `orjson` which is a Rust-based implementation that is typically 2-10x faster for both `dumps` and `loads` operations. The cache `get` and `set` operations are on the critical path for every request. Faster JSON encoding/decoding reduces the CPU time spent serializing cached payloads, directly cutting latency for cache-hit paths (which are the fast path)…

**Facebook page posts checker #18** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:08:55Z
Hypothesis (truncated): **Eliminate the redundant `await self._cache.release_coalesce_lock(cache_key)` call in the `finally` block of `start_run` and `get_dataset_items` by replacing it with a single atomic `SET ... EX ... GET` pipeline that combines the lock acquisition, result storage, and lock release into fewer round-trips — but more practically, skip the coalescing lock entirely for `get_dataset_items` when the cache already missed, since the lock+poll adds at least one extra Redis RTT on every cach…

**Facebook page posts checker #17** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:08:17Z
Hypothesis (truncated): **Pre-compute and reuse `httpx` connection pools by creating a single `httpx.AsyncClient` at application startup (stored in `app.state`) rather than instantiating a new client (and its underlying connection pool) on every single API call.** Each call to `start_run`, `get_run_status`, and `get_dataset_items` currently does `async with httpx.AsyncClient(...) as client:`, which creates a new client, establishes a fresh TCP connection (including TLS handshake) to `api.apify.com`, and…

**Facebook page posts checker #16** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:07:46Z
Hypothesis (truncated): **Disable Pydantic response model validation on the route handlers by setting `response_model=None` and returning pre-built dicts directly as `JSONResponse`, eliminating the per-request Pydantic serialization/validation overhead on the hot response path.** The current routes construct Pydantic model instances (`RunJobResponse`, `RunStatusResponse`, `DatasetItemsResponse`) on every request, which triggers field validation, type coercion, and JSON serialization through FastAPI's re…

**Facebook page posts checker #15** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:07:11Z
Hypothesis (truncated): **Add `response_model=None` and return raw `JSONResponse` objects directly in the route handlers to bypass FastAPI's Pydantic response serialization/validation overhead on the hot path.** FastAPI's default behavior validates and re-serializes the return value through the `response_model` on every response, which involves Pydantic model instantiation, field validation, and a second `json.dumps` pass. By constructing the `JSONResponse` directly from already-validated dicts (skippin…

**Facebook page posts checker #14** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:06:52Z
Hypothesis (truncated): **Pipeline the Redis `GET` (cache check) and `SET` operations using `asyncio.gather` where independent lookups occur together, and more importantly: in `get_dataset_items`, eliminate the sequential Redis round-trip to check run status by storing the terminal-status flag inline with the dataset cache key — replacing the extra `await self._cache.get(self._cache.run_status_key(run_id))` call that currently happens *after* the expensive upstream fetch.** The extra Redis lookup in `ge…

**Facebook page posts checker #13** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:06:06Z
Hypothesis (truncated): **Replace per-request `httpx.AsyncClient` instantiation with a module-level persistent client that is initialized once at application startup and reused across all requests, avoiding the TCP connection setup overhead on every API call.** Every method in `ApifyClient` currently creates a new `httpx.AsyncClient` via `async with httpx.AsyncClient(...) as client:`, which tears down and recreates the underlying connection pool on each call. Even with HTTP keep-alive, the client object…

**Facebook page posts checker #12** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:05:33Z
Hypothesis (truncated): **Use `ujson` (or `orjson`) instead of the standard `json` module for all serialization/deserialization in the cache layer, and avoid double-encoding by storing raw bytes directly.** The cache layer currently calls `json.dumps`/`json.loads` on every get/set operation using Python's stdlib `json`, which is implemented in pure Python and is significantly slower than C-extension alternatives like `orjson`. Since every request path touches Redis cache (at minimum a GET, often a SET),…

**Facebook page posts checker #11** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:04:52Z
Hypothesis (truncated): **Reuse a single persistent `httpx.AsyncClient` instance (with connection pooling) across all requests by creating it once at application startup and storing it on `app.state`, rather than instantiating a new client per request.** Every call to `start_run`, `get_run_status`, or `get_dataset_items` currently creates a new `httpx.AsyncClient` via `async with httpx.AsyncClient(...) as client:`, which means a fresh TCP connection (and TLS handshake) to `api.apify.com` is established…

**Facebook page posts checker #10** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:04:36Z
Hypothesis (truncated): **Skip the Redis cache lookup for `start_run` when no coalescing lock exists, and avoid the double Redis round-trips (lock acquire + cache get) by checking the cache first before attempting to acquire the coalesce lock.** Currently in `start_run`, the code immediately tries to acquire a coalesce lock without first checking if a cached result already exists for that payload hash. This means every cache-hit scenario still pays for a `SET NX` Redis round-trip before it can return. B…

**Facebook page posts checker #9** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:04:08Z
Hypothesis (truncated): **Parallelize the Redis cache lookup and circuit-breaker check with the actual upstream call preparation by overlapping the Redis `get` for `run_status` with a concurrent Redis `get` for the dataset cache key in `get_dataset_items`, and more importantly: pipeline the two Redis operations (`get dataset_key` + `get run_status_key`) in `get_dataset_items` into a single round-trip using `asyncio.gather`.** Specifically, in `get_dataset_items`, the code currently does a sequential `ca…

**Facebook page posts checker #8** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:03:44Z
Hypothesis (truncated): **Replace the blocking `json.dumps`/`json.loads` calls in the cache layer with `orjson`, which is a Rust-backed JSON library that is 2-10x faster for serialization/deserialization, reducing the CPU-bound overhead on every cache read and write path.** Every cache hit (the fast path) still pays the cost of `json.loads` on the raw Redis string, and every cache miss pays `json.dumps` before writing. With `orjson`, these operations become significantly cheaper, directly reducing laten…

**Facebook page posts checker #7** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:03:01Z
Hypothesis (truncated): **Avoid creating a new `httpx.AsyncClient` on every request by using a module-level persistent client with connection pooling in `apify_client.py`.** Every call to `start_run`, `get_run_status`, or `get_dataset_items` currently creates a new `httpx.AsyncClient` via `async with httpx.AsyncClient(...) as client:`, which forces a fresh TCP/TLS handshake to `api.apify.com` on each request. A persistent client reuses existing connections from its pool, eliminating the ~50-150ms TLS ne…

**Facebook page posts checker #6** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:02:38Z
Hypothesis (truncated): **Pre-serialize JSON once in `cache.set` and skip double-serialization by storing the raw string, while also avoiding redundant `json.loads`/`json.dumps` round-trips in the hot path by using `json.loads` only once on cache hit.** The current code calls `json.dumps(value)` in `cache.set` and `json.loads(raw)` in `cache.get` on every cache interaction — but in `facebook_service.py` the values being stored are already plain Python dicts/lists that came from `httpx`'s `resp.json()`…

**Facebook page posts checker #5** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:02:19Z
Hypothesis (truncated): **Replace the 0.5-second fixed polling interval in `wait_for_coalesced_result` with exponential backoff starting at 50ms, and reduce the coalescing lock TTL mismatch overhead by using `asyncio.gather` to parallelize the Redis cache read and circuit-breaker-safe path in `get_run_status` and `get_dataset_items`.** More specifically: the `wait_for_coalesced_result` method currently sleeps 500ms between polls, meaning coalesced requests always wait at least 500ms even if the winner f…

**Facebook page posts checker #4** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:01:36Z
Hypothesis (truncated): **Reuse a persistent `httpx.AsyncClient` with a connection pool at the application level (initialized in lifespan) instead of creating and destroying a new client on every request.** Each call to `ApifyClient.start_run/get_run_status/get_dataset_items` currently does `async with httpx.AsyncClient(...) as client:` which creates a new client, establishes a new TCP+TLS connection, and tears it down — adding ~50-200ms of overhead per request (TLS handshake alone). A shared client wit…

**Facebook page posts checker #3** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:01:09Z
Hypothesis (truncated): **Pipeline Redis cache reads and writes using `asyncio.gather` where multiple independent Redis operations occur sequentially**, specifically in `get_dataset_items` where a cache check for dataset items is followed by a separate cache check for run status — replacing the sequential `await` calls with concurrent execution. Expected Impact: In `get_dataset_items`, after fetching from upstream, the code does a sequential `await self._cache.get(self._cache.run_status_key(run_id))…`

**Facebook page posts checker #2** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T14:00:30Z
Hypothesis (truncated): **Replace per-request `httpx.AsyncClient` instantiation with a module-level persistent client that uses connection pooling and keepalive**, fixing the root cause that caused iter 0/1 to be reverted — by initializing the client during app lifespan (so it's properly managed) and injecting it via `app.state` rather than creating a new TCP connection on every request. The previous attempts likely failed because the client was created as a module-level global without proper lifecycle…

**Facebook page posts checker #1** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T13:59:51Z
Hypothesis (truncated): **Use a persistent (singleton) `httpx.AsyncClient` with connection pooling instead of creating a new client per request.** Currently, every call to `start_run`, `get_run_status`, or `get_dataset_items` creates a new `httpx.AsyncClient` inside an `async with` block, which incurs TCP handshake + TLS negotiation overhead on every upstream call. By instantiating a single `AsyncClient` at app startup and reusing it across all requests, connections to `api.apify.com` will be pooled and…

**Facebook page posts checker #0** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T13:59:14Z
Hypothesis (truncated): **Reuse a persistent `httpx.AsyncClient` instance across requests instead of creating and tearing down a new client per API call.** Each call to `start_run`, `get_run_status`, and `get_dataset_items` currently opens a new TCP connection (including TLS handshake to `api.apify.com`), which adds 50–300ms of overhead to every uncached request and dominates p95 latency. Why This Matters: In `services/apify_client.py`, every method does `async with httpx.AsyncClient(timeo…`
**LinkedIn Job Scraper #49** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T02:26:09Z
Hypothesis (truncated): The `apify_client.py` creates a brand-new `httpx.AsyncClient` (and thus a new TCP connection) on every single upstream call via `async with _build_client() as client:`. Reusing a persistent, module-level `httpx.AsyncClient` with a connection pool eliminates the TCP handshake + TLS negotiation overhead on every request, which dominates p95 latency for cache-miss paths. Expected Impact: Each upstream call currently pays TCP connect + TLS setup (~50–200ms). A persistent client wi…

**LinkedIn Job Scraper #48** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T02:24:56Z
Hypothesis (truncated): The `RedisCache` class creates a new Redis connection pool on every `connect()` call without configuring connection pool size, and the `apify_client.py` `_build_client()` creates a brand-new `httpx.AsyncClient` (with its own connection pool) on every single HTTP request. Reusing a single persistent `httpx.AsyncClient` instance (initialized at startup and stored on `app.state`) eliminates repeated TCP/TLS handshake overhead and connection pool warm-up costs on every upstream call,…

**LinkedIn Job Scraper #47** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T02:24:25Z
Hypothesis (truncated): The `get_dataset` service method reconstructs `JobItem` Pydantic model instances from cached data (iterating `[JobItem(**item) for item in cached.get("items", [])]`), and then the route handler immediately calls `.model_dump()` on the `DatasetResponse` to serialize everything back to JSON — this double round-trip through Pydantic validation is pure overhead on cache hits. By storing and returning the raw dicts directly from cache without materializing Pydantic objects, we avoid re…

**LinkedIn Job Scraper #46** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T02:23:27Z
Hypothesis (truncated): The `RedisCache.make_key`, `run_key`, and `dataset_key` methods use `json.dumps` with `sort_keys=True` and `hashlib.sha256` on every request — even on cache hits where the key is computed twice (once in the route handler for `is_cached` check, and once inside the service). The route handlers in `routes/jobs.py` each call `await cache.get(cache_key)` independently *before* the service layer also calls `await cache.get(cache_key)`, resulting in **two Redis round-trips on every cache…**

**LinkedIn Job Scraper #45** · patch: invalid · result: skip · p95 before/after/Δ: n/a · 2026-04-09T02:22:43Z
Hypothesis (truncated): The `routes/jobs.py` `get_run` handler calls `await cache.get(cache_key)` to check `is_cached`, and then `job_service.get_run()` calls `await cache.get(cache_key)` **again** — doubling the Redis round-trips on every cache miss, and doing two sequential Redis calls even on cache hits. By passing the already-fetched cached value directly into the service layer (or merging the cache check into the service and returning a flag), we can eliminate one Redis round-trip per request. E…
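The redundant route-handler `cache.get` that iteration 45 (and many later LinkedIn iterations) describes disappears once the service threads a cache-hit flag back to the route; this sketch uses an in-memory counting cache and stand-in names for `job_service.get_run`:

```python
import asyncio

class CountingCache:
    """In-memory stand-in for RedisCache that counts GET round-trips."""
    def __init__(self, data=None):
        self._data = dict(data or {})
        self.gets = 0

    async def get(self, key):
        self.gets += 1
        return self._data.get(key)

async def get_run_service(cache, key):
    # The service performs the single cache read and returns
    # (data, was_cached) so the route never has to re-read Redis.
    cached = await cache.get(key)
    if cached is not None:
        return cached, True
    data = {"status": "RUNNING"}  # stand-in for the upstream Apify call
    return data, False

async def route_handler(cache, key):
    # No preflight cache.get here: is_cached comes from the service.
    data, is_cached = await get_run_service(cache, key)
    return {"cached": is_cached, "data": data}

cache = CountingCache({"run:xyz": {"status": "SUCCEEDED"}})
resp = asyncio.run(route_handler(cache, "run:xyz"))
```

On a cache hit this issues exactly one GET, where the pattern the log criticizes issues two.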
LinkedIn Job Scraper44## Hypothesis The `get_dataset` route handler calls `await cache.get(cache_key)` once to check `is_cached`, then `job_service.get_dataset()` calls `await cache.get(cache_key)` **again** — two sequential Redis round-trips for every cached request. Eliminating the redundant first `cache.get` call in the route handler and instead inferring `is_cached` from whether the service returned data from cache (via a simple flag or by checking before the call) will save one Redis RTT (~0.5–2ms) on every cacinvalidmsms%skip2026-04-09T02:22:01.066001+00:00
LinkedIn Job Scraper43## Hypothesis The `_normalize_job` function in `job_service.py` performs redundant `dict.get()` calls with fallback chains for every field on every job item during deserialization, and then `JobItem(**_normalize_job(item))` constructs a Pydantic model with full validation for each item — but when serving from cache, the code re-instantiates `JobItem` objects from already-validated dicts only to immediately call `.model_dump()` on the response. The cache hit path in `get_dataset` unnecessarily rinvalidmsms%skip2026-04-09T02:21:23.078165+00:00
LinkedIn Job Scraper42## Hypothesis The `routes/jobs.py` `get_dataset` handler calls `await cache.get(cache_key)` twice — once to set `is_cached` and once implicitly inside `job_service.get_dataset()` — adding an unnecessary extra Redis round-trip on every request. Eliminating the redundant preflight `cache.get` call in the route handler and instead inferring `is_cached` from whether `job_service.get_dataset` returned quickly (or by threading the cache-hit flag through the service return value) will save one full Reinvalidmsms%skip2026-04-09T02:20:40.365660+00:00
LinkedIn Job Scraper41## Hypothesis The `_build_client()` function in `apify_client.py` creates a brand-new `httpx.AsyncClient` instance on every single API call, which means a new TCP connection (and TLS handshake) must be established for each upstream request. By creating a single persistent `AsyncClient` instance at module level (with connection pooling), subsequent requests to the same host will reuse existing connections, eliminating connection setup overhead and significantly reducing p95 latency for cache-misinvalidmsms%skip2026-04-09T02:19:48.868668+00:00
LinkedIn Job Scraper40## Hypothesis The `get_dataset` service method deserializes cached data by constructing individual `JobItem(**item)` Pydantic model instances for every item in the list, then the route handler immediately re-serializes them via `.model_dump()` for the JSON response. This double-conversion (dict → Pydantic model → dict) is pure CPU overhead on the hot cache-hit path; we can skip it by returning the raw dicts directly from cache and only constructing `JobItem` models when needed for validation oninvalidmsms%skip2026-04-09T02:19:07.105999+00:00
LinkedIn Job Scraper39## Hypothesis The `RedisCache.wait_for_coalesced` method uses a fixed polling interval of 500ms, meaning coalesced requests that arrive while another coroutine is fetching upstream data waste up to 500ms per poll cycle waiting for the cache to populate. Reducing the initial polling interval with exponential backoff (starting at 50ms) will significantly reduce p95 latency for coalesced requests that resolve quickly. ## Expected Impact For requests that are coalesced behind an in-flight upstreainvalidmsms%skip2026-04-09T02:18:34.750241+00:00
LinkedIn Job Scraper38## Hypothesis The `get_dataset` route handler in `routes/jobs.py` calls `await cache.get(cache_key)` to check `is_cached`, and then `job_service.get_dataset(...)` internally calls `await cache.get(cache_key)` **again** — resulting in two sequential Redis round-trips on every cache-miss path and two round-trips on every cache-hit path. Eliminating the redundant first `cache.get` call in the route handler (using the `cached` flag returned from the service instead) will remove one Redis RTT from einvalidmsms%skip2026-04-09T02:17:38.428699+00:00
LinkedIn Job Scraper37## Hypothesis The `ApifyClient` methods (`start_run`, `get_run`, `get_dataset`) each create a new `httpx.AsyncClient` instance per call via `_build_client()`, incurring TCP connection establishment overhead on every upstream request. By using a single shared persistent `httpx.AsyncClient` with connection pooling (initialized at startup and reused across all calls), we eliminate repeated connection setup costs, reducing p95 latency particularly for cache-miss paths where upstream calls are unavoinvalidmsms%skip2026-04-09T02:16:56.583751+00:00
LinkedIn Job Scraper36## Hypothesis The `get_run` route handler calls `await cache.get(cache_key)` twice — once to set `is_cached` and again inside `job_service.get_run()` — doubling the Redis round-trip overhead on every request. Eliminating the redundant first `cache.get` call in the route handler and instead inferring `is_cached` from whether `job_service.get_run` returned without hitting upstream (by having the service signal cache hits) will halve the Redis calls on the hot path. ## Expected Impact For cachedinvalidmsms%skip2026-04-09T02:16:19.836683+00:00
LinkedIn Job Scraper35## Hypothesis The `get_dataset` route handler in `routes/jobs.py` calls `await cache.get(cache_key)` to check `is_cached`, and then `JobService.get_dataset` calls `await cache.get(cache_key)` **again** for the same key — doubling the Redis round-trips on every cache-hit path. Eliminating the redundant first `cache.get` in the route handler (deriving `is_cached` from the service response instead) will cut one Redis RTT (~0.5–2ms) from every cached request, meaningfully reducing p95 latency for tinvalidmsms%skip2026-04-09T02:15:15.350172+00:00
LinkedIn Job Scraper34## Hypothesis The `get_dataset` route handler in `routes/jobs.py` reconstructs `JobItem` Pydantic objects from cached data (via `job_service.get_dataset`) only to immediately call `.model_dump()` on them again for the JSON response — this double-serialization (deserialize cached dicts → JobItem objects → dicts) is pure overhead on every cache hit. By bypassing Pydantic object construction when data is already cached and returning the cached dicts directly, we can eliminate this redundant work oinvalidmsms%skip2026-04-09T02:14:41.210515+00:00
LinkedIn Job Scraper33## Hypothesis The `apify_client.py` creates a new `httpx.AsyncClient` instance for every request (inside `_build_client()`), which incurs TCP connection establishment overhead on every upstream call. By using a single persistent `httpx.AsyncClient` with connection pooling shared across requests, we eliminate repeated TCP handshake and TLS negotiation costs, directly reducing p95 latency for cache-miss paths. ## Expected Impact Each upstream HTTP call currently pays the full connection setup cinvalidmsms%skip2026-04-09T02:13:59.581308+00:00
**LinkedIn Job Scraper · iteration 32** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:13:09.534995+00:00
Hypothesis: The `get_run` and `get_dataset` route handlers each perform a redundant Redis `GET` call (to determine `is_cached`) immediately before `job_service.get_run/get_dataset` which also performs its own Redis `GET`. This doubles the Redis round-trips on every cache-miss path and adds unnecessary latency on the cache-hit path too. Eliminating the pre-check `GET` calls in the route handlers and instead inferring `cached` status from whether the service call was a cache hit (by threading t…

**LinkedIn Job Scraper · iteration 31** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:12:33.596069+00:00
Hypothesis: The `routes/jobs.py` handlers for `get_run` and `get_dataset` each perform a redundant Redis `GET` call (to set the `is_cached` boolean flag) immediately before `job_service.get_run/get_dataset` which performs its own Redis `GET` internally. This doubles the number of Redis round-trips on every request, adding unnecessary latency to the p95 path. Expected impact: Eliminating the duplicate Redis `GET` in both route handlers removes one full Redis round-trip per request (typica…

**LinkedIn Job Scraper · iteration 30** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:11:53.200021+00:00
Hypothesis: The `_normalize_job` function is called for every job item in `get_dataset`, and each call performs ~18 redundant `dict.get()` lookups with fallback keys even when the primary key exists. By restructuring `_normalize_job` to use a compact, pre-defined field mapping and short-circuit on first non-None value, we reduce per-item CPU overhead in large dataset responses. Expected impact: For datasets with 100–1000 job items, the aggregate normalization cost is non-trivial pure-Pyt…
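The short-circuiting field map proposed in iteration 30 can be sketched as follows; the source keys below are invented for illustration, since the log does not show the real `_normalize_job` mapping:

```python
# Candidate source keys per normalized field, in priority order.
# (The keys are invented; the real mapping lives in job_service.py.)
FIELD_MAP = {
    "title": ("title", "jobTitle", "position"),
    "company": ("company", "companyName", "employer"),
    "location": ("location", "jobLocation"),
}

def normalize_job(raw):
    out = {}
    for field, candidates in FIELD_MAP.items():
        value = None
        for key in candidates:
            value = raw.get(key)
            if value is not None:
                break  # short-circuit: skip the remaining fallbacks
        out[field] = value
    return out

raw = {"jobTitle": "SRE", "company": "Acme", "jobLocation": "Remote"}
print(normalize_job(raw))
# {'title': 'SRE', 'company': 'Acme', 'location': 'Remote'}
```

Each field now costs at most as many lookups as it has fallbacks, and typically just one when the primary key is present.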
**LinkedIn Job Scraper · iteration 29** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:11:07.028798+00:00
Hypothesis: The `JobService.get_dataset` method calls `[JobItem(**_normalize_job(item)) for item in raw_items]` which constructs full Pydantic model instances (with validation) for every job item on both cache miss AND cache hit paths — but on cache hits it does `[JobItem(**item) for item in cached.get("items", [])]` which also validates every field. Since the route handler immediately calls `.model_dump()` on the entire `DatasetResponse`, we can skip Pydantic model construction entirely and …

**LinkedIn Job Scraper · iteration 28** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:10:23.057182+00:00
Hypothesis: **The `RedisCache.make_key`, `run_key`, and `dataset_key` methods use `hashlib.sha256` which, while fast, involves Python-level string encoding and digest computation on every cache lookup. More critically, `make_key` calls `json.dumps` with `sort_keys=True` and `default=str` on every invocation — but the dominant cost is that `get_dataset` and `get_run` in the route handlers each call `cache.get(cache_key)` to check `is_cached`, then `job_service` calls `cache.get(cache_key)` *ag…

**LinkedIn Job Scraper · iteration 27** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:09:40.603279+00:00
Hypothesis: **The `get_run` and `get_dataset` route handlers each perform a redundant Redis `GET` call solely to set the `is_cached` boolean flag, then the service layer immediately performs another `GET` for the same key — resulting in two sequential Redis round-trips on every cache miss and cache hit path.** Eliminating the speculative `is_cached` pre-check in the route handlers and instead inferring the cached status from whether the service layer returned data that was already in cache (…

**LinkedIn Job Scraper · iteration 26** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:08:41.506799+00:00
Hypothesis: **The `apify_client.py` creates a new `httpx.AsyncClient` instance for every upstream request (inside `_build_client()`), incurring TCP connection establishment overhead on every call. Reusing a single persistent `AsyncClient` with connection pooling will eliminate this repeated connect latency, directly reducing p95 for cache-miss paths.** Expected impact: Each `async with _build_client() as client:` block creates a fresh client, performs a new TCP handshake (and TLS negotia…

**LinkedIn Job Scraper · iteration 25** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:08:05.993629+00:00
Hypothesis: **The `RedisCache` class in `cache.py` calls `json.dumps` and `json.loads` on every cache read/write, but the `make_key` method *also* calls `json.dumps` on the same payload — meaning the payload is serialized twice per cache-miss request path. Additionally, `json.dumps(value)` in `cache.set` re-serializes data that was already deserialized from a prior `json.loads` in `cache.get`. By using `orjson` (a 3–10× faster JSON library) for all serialization/deserialization operations in …
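For the `orjson` swap proposed in iteration 25 (and again in iteration 12 below), a hedged sketch of the serialization helpers, falling back to the stdlib `json` so the snippet runs even when `orjson` is not installed; the `RedisCache` wiring itself is omitted:

```python
# Serialization helpers for the cache layer: orjson when available,
# stdlib json otherwise, both producing and accepting bytes.
try:
    import orjson

    def dumps(obj) -> bytes:
        return orjson.dumps(obj)

    def loads(data):
        return orjson.loads(data)
except ImportError:
    import json

    def dumps(obj) -> bytes:
        return json.dumps(obj, separators=(",", ":")).encode()

    def loads(data):
        return json.loads(data)

payload = {"items": [{"id": i, "title": f"job-{i}"} for i in range(3)]}
assert loads(dumps(payload)) == payload  # round-trip is lossless
print(type(dumps(payload)))  # <class 'bytes'>
```

Working in bytes end to end also avoids a decode/encode pair on the Redis boundary, which is a small additional win on large dataset payloads.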
**LinkedIn Job Scraper · iteration 24** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:07:28.488746+00:00
Hypothesis: **The `routes/jobs.py` handlers for `get_run` and `get_dataset` each perform a redundant `cache.get()` call solely to set the `is_cached` boolean flag, doubling the Redis round-trips on cache misses — this extra RTT directly inflates p95 latency for the common cache-miss path.** The fix is to eliminate the speculative `cache.get()` in the route handlers and instead have `JobService` return a `(result, is_cached)` tuple, so the cache lookup happens exactly once inside the service…

**LinkedIn Job Scraper · iteration 23** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:06:46.516251+00:00
Hypothesis: **The `get_run` and `get_dataset` route handlers perform a redundant double Redis `GET` — one to check `is_cached` for the response envelope flag, and then the service layer performs another `GET` for the actual data — doubling Redis round-trips on every cache-hit path. Eliminating the speculative `is_cached` pre-fetch in the route handlers and instead having the service return a `(result, from_cache)` tuple will halve Redis latency on the hot cache-hit path.** Expected impact: …

**LinkedIn Job Scraper · iteration 22** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:05:45.638679+00:00
Hypothesis: **The `apify_client.py` creates a new `httpx.AsyncClient` for every request via `_build_client()`, incurring TCP connection establishment overhead on every upstream call.** By maintaining a single persistent `AsyncClient` with connection pooling as an instance attribute on `ApifyClient`, we eliminate repeated TCP handshakes and TLS negotiations, which should meaningfully reduce p95 latency for cache-miss paths. Expected impact: Each `async with _build_client() as client:` blo…

**LinkedIn Job Scraper · iteration 21** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:04:52.993207+00:00
Hypothesis: **The `JobService.get_dataset` method unnecessarily reconstructs `JobItem` Pydantic model instances from cached data (via `[JobItem(**item) for item in cached.get("items", [])]`) only to immediately call `.model_dump()` on them in the route handler — this double-serialization round-trip through Pydantic validation is pure overhead on every cache hit.** On cache hits (which should be the majority of requests after warm-up), the service deserializes raw dicts into `JobItem` objects…

**LinkedIn Job Scraper · iteration 20** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:04:18.521217+00:00
Hypothesis: **The `RedisCache.wait_for_coalesced` method uses a fixed 0.5-second polling interval with `asyncio.sleep`, meaning coalesced requests wait on average 250ms of unnecessary idle time per poll cycle before discovering a cached result; replacing this with an adaptive exponential backoff starting at 50ms would reduce the average wait overhead by ~80% for fast upstream responses.** Expected impact: When a second (or third) concurrent request hits the coalesce lock, it enters a pol…
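The backoff poll described in iteration 20 (and iteration 10 below) can be sketched like this; the 50 ms start, doubling growth, and 500 ms cap follow the hypothesis text, while `cache_get` is an in-memory stand-in for the Redis read:

```python
import asyncio

async def wait_for_coalesced(cache_get, key, *, timeout=10.0,
                             initial=0.05, cap=0.5):
    """Poll for a result that another in-flight request is producing.
    The interval starts at 50 ms and doubles up to a 500 ms cap, so
    fast upstream responses are picked up quickly instead of after a
    fixed 0.5 s sleep."""
    interval = initial
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    while loop.time() < deadline:
        value = await cache_get(key)
        if value is not None:
            return value
        await asyncio.sleep(interval)
        interval = min(interval * 2, cap)
    return None  # caller falls back to fetching upstream itself

async def demo():
    store = {}

    async def cache_get(key):
        return store.get(key)

    async def producer():
        await asyncio.sleep(0.12)  # the "first" request finishes upstream
        store["run:abc"] = {"status": "SUCCEEDED"}

    asyncio.create_task(producer())
    return await wait_for_coalesced(cache_get, "run:abc", timeout=2.0)

print(asyncio.run(demo()))  # {'status': 'SUCCEEDED'}
```

Here a result landing after ~120 ms is observed on the third poll (~150 ms in), versus ~500 ms with the fixed interval; a Redis pub/sub notification would remove the polling entirely, at the cost of more moving parts.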
**LinkedIn Job Scraper · iteration 19** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:03:44.389460+00:00
Hypothesis: **The `get_run` and `get_dataset` route handlers each perform a redundant `cache.get()` call solely to set the `is_cached` boolean flag, duplicating the cache lookup that `job_service` will perform milliseconds later — replacing this with a sentinel value embedded in the service response eliminates one round-trip Redis call per request on the hot path.** Expected impact: On cached requests (the common case after warm-up), each handler currently makes **two** sequential Redis…

**LinkedIn Job Scraper · iteration 18** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:02:56.435750+00:00
Hypothesis: **The `_normalize_job` function in `job_service.py` performs 18 redundant `dict.get()` calls with fallback chaining on every job item during both cache misses (upstream fetch) and cache hits (deserialization), when cache hits should bypass normalization entirely by storing already-normalized data and reconstructing `JobItem` objects directly from the cached dicts without re-running `_normalize_job`.** Wait — the cache hit path already skips `_normalize_job` (it does `JobItem(**it…

**LinkedIn Job Scraper · iteration 17** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:02:10.971735+00:00
Hypothesis: **The `routes/jobs.py` handlers for `get_run` and `get_dataset` each call `cache.get(cache_key)` twice — once to check `is_cached` for the response envelope, and then again inside `job_service.get_run/get_dataset` — doubling Redis round-trips on every cache miss and adding an extra RTT on cache hits.** Eliminating the redundant preflight `cache.get` call in the route handlers (deriving `is_cached` from whether the service returned a cached result via a small return-value signal, …

**LinkedIn Job Scraper · iteration 16** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:01:37.383473+00:00
Hypothesis: **The `routes/jobs.py` handlers for `get_run` and `get_dataset` each perform a redundant `cache.get()` call solely to set the `is_cached` boolean flag, and then `job_service.get_run/get_dataset` performs another `cache.get()` internally — resulting in two Redis round-trips on every cache-hit path where only one is needed.** By eliminating the redundant upfront `cache.get()` in the route handlers and instead inferring `is_cached` from whether the service returned data without an u…

**LinkedIn Job Scraper · iteration 15** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:00:36.037976+00:00
Hypothesis: **The `_build_client()` function in `apify_client.py` creates a brand-new `httpx.AsyncClient` (and underlying connection pool) on every single upstream request, discarding it immediately after — this means every call pays TCP connection establishment overhead instead of reusing keep-alive connections.** By replacing the per-request `async with _build_client() as client` pattern with a single long-lived `httpx.AsyncClient` instance that persists for the application lifetime (initi…

**LinkedIn Job Scraper · iteration 14** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T02:00:13.516618+00:00
Hypothesis: **The `get_dataset` service method deserializes cached JSON into `JobItem` Pydantic objects (via `[JobItem(**item) for item in cached.get("items", [])]`) on every cache hit, which is expensive for large result sets — instead, the route handler only needs the raw dicts to pass into `DatasetResponse`, so we can return the raw cached data directly and defer Pydantic validation only to the final response serialization step.** Explanation: On a cache hit for `get_dataset`, the cur…

**LinkedIn Job Scraper · iteration 13** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T01:59:11.146380+00:00
Hypothesis: **The `get_run` and `get_dataset` route handlers each perform a redundant cache lookup (`await cache.get(cache_key) is not None`) solely to set the `is_cached` boolean flag, and then `JobService` performs the *same* cache lookup again internally — doubling Redis round-trips on every cache-miss path and adding an extra RTT on the cache-hit path.** The fix is to eliminate the route-level pre-check and instead have `JobService` return a `(result, is_cached)` tuple, so the cache is c…

**LinkedIn Job Scraper · iteration 12** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T01:58:45.856318+00:00
Hypothesis: **The `RedisCache.get` and `RedisCache.set` methods use Python's `json.loads`/`json.dumps` for serialization, but for large dataset payloads the bottleneck is CPU-bound JSON parsing of potentially hundreds of job items — switching to `orjson` (a Rust-backed JSON library) will significantly reduce serialization/deserialization overhead and cut p95 latency on cache-hit paths.** Expected impact: Cache hits dominate the hot path for repeated dataset/run requests. The current code…

**LinkedIn Job Scraper · iteration 11** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T01:57:40.486161+00:00
Hypothesis: **The `ApifyClient` creates a new `httpx.AsyncClient` for every single upstream request (including TLS handshake and connection setup overhead), and all 10 previous attempts to fix this were reverted. The root cause of those reversions is likely that a single module-level client was shared unsafely or closed prematurely — so instead, we should attach a persistent `httpx.AsyncClient` to `app.state` during lifespan startup and inject it into `ApifyClient`, reusing the connection poo…
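Iteration 11 proposes a variant of the client-reuse fix: own the client in the application lifespan and share it via `app.state`. It can be sketched without FastAPI by using `contextlib.asynccontextmanager` and a `SimpleNamespace` standing in for the app object; in a real app the same `lifespan` function would be passed as `FastAPI(lifespan=lifespan)`, the stand-in client would be `httpx.AsyncClient`, and dependencies would read `request.app.state.apify_client`:

```python
import asyncio
from contextlib import asynccontextmanager
from types import SimpleNamespace

class FakeAsyncClient:
    """Stand-in for httpx.AsyncClient."""
    def __init__(self):
        self.closed = False

    async def aclose(self):
        self.closed = True

@asynccontextmanager
async def lifespan(app):
    # Startup: one client for the whole process, shared via app.state.
    app.state.apify_client = FakeAsyncClient()
    try:
        yield
    finally:
        # Shutdown: release the connection pool exactly once.
        await app.state.apify_client.aclose()

async def demo():
    app = SimpleNamespace(state=SimpleNamespace())
    async with lifespan(app):
        first = app.state.apify_client
        second = app.state.apify_client  # every request sees one client
        assert first is second and not first.closed
    return app.state.apify_client.closed

print(asyncio.run(demo()))  # True
```

Tying the client's lifetime to the lifespan sidesteps the failure mode iteration 11 suspects in earlier attempts: a module-level client closed prematurely or bound to a dead event loop.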
**LinkedIn Job Scraper · iteration 10** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T01:57:03.338862+00:00
Hypothesis: **The `RedisCache.wait_for_coalesced` method uses a fixed 0.5-second polling interval, meaning coalesced requests waste up to 500ms of unnecessary waiting per poll cycle. Switching to exponential backoff starting at 50ms (capped at 500ms) will reduce the average wait time for coalesced requests significantly, cutting p95 latency for concurrent duplicate requests.** Expected impact: When multiple identical requests arrive simultaneously, all but the first are coalesced and pol…

**LinkedIn Job Scraper · iteration 9** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T01:56:15.252907+00:00
Hypothesis: **The `get_run` and `get_dataset` route handlers perform a redundant cache read (`await cache.get(cache_key) is not None`) solely to set the `is_cached` boolean flag, while `JobService` then performs the *same* cache read again internally — resulting in two sequential Redis round-trips on every cache-miss path and wasted latency on every cache-hit path.** The fix is to eliminate the duplicate cache read in the route handlers entirely and instead have `JobService` return a `(resul…

**LinkedIn Job Scraper · iteration 8** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T01:55:43.996359+00:00
Hypothesis: **The `JobService.get_dataset` method deserializes cached data by constructing `JobItem(**item)` for every item in the list on cache hits, which is pure CPU overhead that adds latency proportional to dataset size. Returning the raw cached list directly (already in the correct dict shape) and only materializing `JobItem` objects when needed — or skipping Pydantic model construction entirely for the cached path — will reduce p95 latency on cache-hit requests.** Explanation: On …

**LinkedIn Job Scraper · iteration 7** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T01:55:14.323807+00:00
Hypothesis: **The `get_run` and `get_dataset` route handlers each perform a redundant Redis `GET` call before delegating to `JobService`, which itself performs another `GET` — resulting in 2 serial Redis round-trips on every cache-miss path and wasting ~1–5ms per request. Eliminating the redundant pre-check `GET` in the route handlers (used only to set the `is_cached` flag) and instead inferring the cache status from whether `JobService` returned a cached result will halve the number of Redis…

**LinkedIn Job Scraper · iteration 6** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T01:54:23.944281+00:00
Hypothesis: **The `get_run` and `get_dataset` route handlers perform a redundant double Redis GET — once to set `is_cached` for the response envelope, then again implicitly inside `job_service.get_run/get_dataset` — adding an unnecessary round-trip to Redis on every cache-miss path. Eliminating this duplicate lookup by having the service layer return a cache-hit flag directly will save one Redis RTT (~0.5–2ms) on every request, which compounds significantly at p95 when Redis latency is variab…

**LinkedIn Job Scraper · iteration 5** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T01:53:51.614266+00:00
Hypothesis: **The `get_run` and `get_dataset` route handlers each perform a redundant Redis `GET` call solely to set the `is_cached` boolean flag, and then `JobService` performs another `GET` for the actual cache lookup — resulting in 2 serial Redis round-trips on every cache-miss path and wasted latency on cache-hits too. Eliminating the speculative `GET` in the route layer and instead having `JobService` return a `(result, cache_hit: bool)` tuple reduces every request's Redis overhead by on…

**LinkedIn Job Scraper · iteration 4** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T01:53:07.843270+00:00
Hypothesis: **Creating a new `httpx.AsyncClient` for every upstream request incurs TCP connection establishment overhead on every call; reusing a single persistent client with connection pooling eliminates this cost.** Every method in `ApifyClient` calls `_build_client()` inside an `async with` block, which creates and immediately tears down an `httpx.AsyncClient` — meaning a fresh TCP (and TLS) handshake is required for each upstream request. By instantiating one shared `httpx.AsyncClient` …

**LinkedIn Job Scraper · iteration 3** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T01:52:32.878908+00:00
Hypothesis: **The `get_run` and `get_dataset` routes perform a redundant double Redis `GET` — one eager check to set `is_cached` for the response envelope, then another inside `job_service.get_run/get_dataset` — adding an unnecessary serial Redis round-trip on every cache-miss path. Eliminating this by having the service layer return a `(result, cache_hit: bool)` tuple and using that flag directly will remove one Redis RTT from every non-cached request.** Explanation: In `routes/jobs.py`…

**LinkedIn Job Scraper · iteration 2** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T01:52:02.631198+00:00
Hypothesis: **The `get_dataset` and `get_run` routes perform two sequential Redis `GET` calls per request — one to check cache for the `is_cached` flag in the route handler, and another inside the service layer — doubling Redis round-trip latency on every cache hit path.** Eliminating the redundant preflight `GET` in the route handlers (which is only used to set the `cached: bool` field in the response) and instead threading that information back from the service layer will halve Redis operat…

**LinkedIn Job Scraper · iteration 1** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T01:51:18.893875+00:00
Hypothesis: **Reuse a single persistent `httpx.AsyncClient` instead of creating and tearing down a new client per request in `apify_client.py`.** Each call to `start_run`, `get_run`, or `get_dataset` currently executes `async with _build_client() as client:`, which creates a new `httpx.AsyncClient`, establishes a new TCP connection (including TLS handshake to `api.apify.com`), and destroys it after the call. Connection establishment alone adds 50–200ms per request. A persistent client with a…

**LinkedIn Job Scraper · iteration 0** · patch: invalid · p95 before/after: n/a · delta: n/a · result: skip · 2026-04-09T01:50:33.914612+00:00
Hypothesis: **Reuse a single persistent `httpx.AsyncClient` instead of creating and tearing down a new client per request.** The current `_build_client()` pattern inside `async with` blocks creates a fresh TCP connection (including TLS handshake to `api.apify.com`) for every upstream call, adding 50–200ms of connection overhead to every request that misses cache. Expected impact: By maintaining a shared, connection-pooled `AsyncClient` as a module-level singleton (initialized at app star…