
Optimising INP: where JavaScript actually hurts you

INP replaced FID in Core Web Vitals in 2024 and it's a tougher metric. I worked through how JavaScript actually drives INP while bringing real projects from 500ms to under 200ms.

Google retired First Input Delay from Core Web Vitals in March 2024 and replaced it with Interaction to Next Paint. That change woke a lot of products up. FID only measured the delay of the first input; INP observes every interaction on the page and reports (roughly) the worst one, ignoring one outlier per 50 interactions. It’s a much more honest metric, and a harder one to pass.

The “Good” threshold for INP is 200ms, assessed at the 75th percentile of page visits. On a client project I dropped p75 INP from 580ms to 190ms over three weeks of intense JavaScript work. Here’s what I learned.

What INP measures, and why it moves

INP measures this chain:

  1. The user triggers an interaction (tap, click, key press)
  2. The event waits for the main thread to free up (input delay)
  3. The browser runs the event handlers (processing time)
  4. The browser does layout and paint (presentation delay)
  5. Time from the input to that next paint = the interaction’s latency; the worst such latency on the page becomes the INP value

Wherever this chain stretches, INP gets worse. Three main culprits: long JS in the handler, too many side-effects per handler (state updates, re-renders), and layout work that blows the frame budget.

Identify long tasks

Chrome DevTools Performance panel is the best place to understand INP. Record while you simulate user actions, and look for long tasks.

A “long task” is anything over 50ms on the main thread. A long task that overlaps an interaction adds input delay, which feeds straight into INP. You can catch them in production with the Performance Observer API:

// Log long tasks (>50ms) in production; buffered: true also reports
// tasks that finished before the observer was registered
new PerformanceObserver((list) => {
    list.getEntries().forEach((entry) => {
        console.log('Long task:', entry.duration, entry.attribution);
    });
}).observe({type: 'longtask', buffered: true});

Break up event handlers

The most common offender: doing big synchronous work inside the handler. For example:

button.addEventListener('click', () => {
    const filtered = bigList.filter(complexPredicate);
    const sorted = filtered.sort(expensiveComparator);
    renderList(sorted);
});

That handler might run for 300ms, which makes that interaction’s latency 300ms-plus. The fix: break the work into chunks with a yield point between each chunk so the browser can paint.

button.addEventListener('click', async () => {
    showLoadingState();
    await new Promise(r => setTimeout(r, 0)); // yield
    const filtered = bigList.filter(complexPredicate);
    await new Promise(r => setTimeout(r, 0));
    const sorted = filtered.sort(expensiveComparator);
    renderList(sorted);
});

The first paint after the event (showing the loading state) lands fast; the heavy work still runs on the main thread, but in later tasks that the browser can interleave with paint and input.

Modern alternative: scheduler.yield(), shipped in Chrome 129. It explicitly hands control back so the browser can paint, and the continuation after the yield is prioritised ahead of other queued tasks.
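
Combining the two: a small helper that prefers scheduler.yield() when the browser supports it and falls back to a setTimeout-based yield. The helper names (yieldToMain, processInChunks) and the chunk size are my own, a sketch rather than a canonical implementation:

```javascript
// Yield to the main thread: prefer scheduler.yield() where supported,
// fall back to a macrotask via setTimeout
const yieldToMain = () =>
    (typeof scheduler !== 'undefined' && scheduler.yield)
        ? scheduler.yield()
        : new Promise((resolve) => setTimeout(resolve, 0));

// Process a large array in chunks, yielding between chunks so the
// browser can handle pending input and paint
async function processInChunks(items, chunkSize, fn) {
    for (let i = 0; i < items.length; i += chunkSize) {
        items.slice(i, i + chunkSize).forEach(fn);
        await yieldToMain();
    }
}
```

The same shape works for any chunkable loop: filtering, diffing, serialisation.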

requestIdleCallback vs requestAnimationFrame

Different work, different scheduling:

  • requestAnimationFrame: UI updates, work that should land on a paint frame. Runs right before the next paint.
  • requestIdleCallback: work for idle time. Background computation while the user isn’t interacting.

For INP, requestIdleCallback is gold. Analytics pushes, prefetches, cache warming, anything that’s “should happen, but not now”, goes here.
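
A minimal idle-task queue along those lines; the helper names (runWhenIdle, scheduleIdle) are my own, and the setTimeout shim for browsers without requestIdleCallback (e.g. Safari) is an assumption about acceptable fallback behaviour:

```javascript
// Shim requestIdleCallback with setTimeout where it's missing
const runWhenIdle = typeof requestIdleCallback !== 'undefined'
    ? requestIdleCallback
    : (cb) => setTimeout(() => cb({didTimeout: false, timeRemaining: () => 50}), 1);

const idleQueue = [];
function scheduleIdle(task) {
    idleQueue.push(task);
    runWhenIdle((deadline) => {
        // Drain queued tasks only while this idle period has time left
        while (idleQueue.length && deadline.timeRemaining() > 0) {
            idleQueue.shift()();
        }
    });
}
```

Pushing analytics or prefetch calls through scheduleIdle keeps them out of the interaction’s critical path.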

Third-party script impact

Third-party scripts are the quiet INP killer. Google Tag Manager, chat widgets, A/B testing SDKs all block the main thread.

The moves that work best: load them with defer or async, sandbox them in iframes or move them off the main thread entirely (Partytown runs GTM and similar scripts in a web worker), and delay anything that can wait until after the first interaction.

On one project, delaying the chat widget’s load by 10 seconds improved p75 INP by 150ms.

React: the re-render trap

In React apps, the biggest source of INP is unnecessary re-rendering. A click triggers a state update, the component tree re-renders, and if 60+ components render you burn 100ms easily.

Strategies that help:

  • React.memo: don’t re-render components whose props haven’t changed
  • useMemo / useCallback: expensive computation and referential equality
  • useTransition: mark non-urgent state updates as low priority
  • useDeferredValue: keep typing responsive while an expensive derived value lags behind

React 18’s concurrent rendering features were designed for exactly this. On migrated projects I’ve seen INP improve by around 40%.

CSS selector cost

On large DOMs, complex CSS selectors blow up style recalc. Selectors like [data-x] > * + *:not(.hidden) ~ .foo can add 20 to 50ms per click.

Simplify selectors, keep the DOM small, and use contain: layout paint (or content-visibility: auto) to narrow the scope of style and layout work.

Web Worker: the only real answer for big work

Some work is fundamentally CPU-heavy: large dataset filtering, image processing, complex calculation. Whatever you do on the main thread, INP will suffer.

Move it to a Web Worker. Libraries like Comlink give you an ergonomic RPC shape.

const worker = new Worker('/processor.js');

// postMessage has no return value, so wrap the round trip in a promise
const callWorker = (msg) => new Promise((resolve) => {
    worker.addEventListener('message', (e) => resolve(e.data), {once: true});
    worker.postMessage(msg);
});

button.addEventListener('click', async () => {
    render(await callWorker({type: 'filter', data}));
});

Main thread stays free, INP stays clean.
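
For completeness, the worker side (processor.js) might look like this sketch. The {type: 'filter'} message shape matches the click handler above; the active-items predicate is an assumption:

```javascript
// processor.js (hypothetical): heavy filtering runs here, off the main thread.
// In a real Worker, `self` is the worker global scope; the fallback lets
// the handler logic run in other environments too.
const scope = typeof self !== 'undefined' ? self : globalThis;

function handle({type, data}) {
    if (type === 'filter') {
        // Assumed predicate: keep only active items
        return data.filter((item) => item.active);
    }
}

scope.onmessage = (e) => scope.postMessage(handle(e.data));
```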

How I collect INP field data

The CrUX report gives you aggregate data, but real-user monitoring is what you need for your own users. The web-vitals library (Google’s official) captures INP along with the other core vitals:

import {onINP} from 'web-vitals/attribution'; // attribution build populates metric.attribution
onINP((metric) => {
    sendToAnalytics({
        name: metric.name,
        value: metric.value,
        attribution: metric.attribution,
    });
});

The metric.attribution field tells you which element triggered the INP interaction and how the time split between input delay, processing, and presentation, so you can see the exact hotspots to optimise.

Final reminder

INP optimisation isn’t a one-off like LCP. Every new feature brings regression risk. Synthetic INP testing in CI is hard but doable: headless Chrome plus Lighthouse or Puppeteer can measure interaction latency on critical user flows and fail builds that exceed the threshold.

Without that discipline, INP balloons again 6 months later and you’re back in maintenance cycles.
