PageSpeed Insights Score Dropped? Here's What to Check
TL;DR: A sudden drop in PageSpeed score is usually caused by something that recently changed: a new script, larger images, updated dependencies, or third-party services degrading. Compare your current metrics to historical data to identify which specific metric dropped (LCP, CLS, or INP), then investigate recent changes that could affect that metric. Most drops are fixable once you identify the cause.
Your PageSpeed score was a comfortable 85, and now it's 62. Google's green checkmarks turned orange, and you're worried about the impact on your search rankings. Before you panic, let's systematically figure out what happened and how to fix it.
What Causes PageSpeed Drops?
PageSpeed Insights measures how quickly your page loads and becomes interactive, plus how stable the layout is during loading. A drop in score means one or more of the underlying metrics got worse. The three Core Web Vitals that matter most are Largest Contentful Paint (LCP), measuring loading performance; Cumulative Layout Shift (CLS), measuring visual stability; and Interaction to Next Paint (INP), measuring responsiveness. (INP comes from field data; in lab tests, Total Blocking Time serves as its proxy.) A significant regression in any of these drags your overall score down.
Step 1: Identify Which Metrics Dropped
Run PageSpeed Insights on your affected page and look at the individual metric scores, not just the overall number. The diagnostic section shows exactly which metrics are in the red, orange, or green zones.
Each metric points to different types of problems.
LCP problems usually involve large images, slow server response, render-blocking resources, or slow-loading web fonts.
CLS problems involve elements that shift after the page starts rendering, like images without dimensions, ads loading late, or fonts swapping.
INP problems involve JavaScript that blocks the main thread, making the page feel unresponsive when users try to interact.
Knowing which metric dropped narrows your investigation considerably.
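The triage above can be sketched as a small helper that rates each metric against Google's published thresholds (good/poor boundaries of 2.5 s and 4 s for LCP, 0.1 and 0.25 for CLS, 200 ms and 500 ms for INP). The comparison function and its shape are illustrative, not part of any official API:

```javascript
// Google's published Core Web Vitals thresholds.
const THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 }, // milliseconds
  cls: { good: 0.1, poor: 0.25 },  // unitless score
  inp: { good: 200, poor: 500 },   // milliseconds
};

function rateMetric(metric, value) {
  const t = THRESHOLDS[metric];
  if (!t) throw new Error(`Unknown metric: ${metric}`);
  if (value <= t.good) return "good";
  if (value <= t.poor) return "needs improvement";
  return "poor";
}

// Compare a current snapshot against a baseline to see which metrics
// crossed into a worse band.
function findRegressions(baseline, current) {
  return Object.keys(current).filter(
    (m) =>
      rateMetric(m, current[m]) !== rateMetric(m, baseline[m]) &&
      current[m] > baseline[m]
  );
}
```

For example, `findRegressions({ lcp: 2100, cls: 0.05, inp: 150 }, { lcp: 4300, cls: 0.05, inp: 150 })` points you straight at LCP.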
Step 2: Compare to Historical Data
If you have historical PageSpeed data from performance monitoring tools, compare your current metrics to recent history. Look for the date when scores changed.
This date helps you correlate the drop with specific changes: deployments, plugin updates, content additions, or third-party service changes.
If you don't have historical data, start monitoring now. Future problems will be much easier to diagnose when you can see exactly when metrics changed.
Step 3: Review Recent Changes
Think about what changed on your site around when the score dropped.
Code Deployments
Did you deploy new code? New JavaScript libraries, analytics integrations, or feature code can add weight and processing time. Check your git history for recent merges.
Look specifically for new third-party scripts, changes to image handling, new CSS frameworks or components, JavaScript bundle size changes, and new fonts or font loading approaches.
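If you record bundle sizes in CI, a minimal budget check like this sketch can flag a regression before it ships. The 10% tolerance and 250 KB budget are illustrative numbers, not standards; pick values that fit your site:

```javascript
// Flag a bundle-size regression against a simple performance budget.
// The defaults here are illustrative, not recommended values.
function checkBundleSize(previousBytes, currentBytes, budgetBytes = 250 * 1024) {
  const deltaPct = ((currentBytes - previousBytes) / previousBytes) * 100;
  return {
    deltaPct: Math.round(deltaPct * 10) / 10, // growth since last build, in %
    overBudget: currentBytes > budgetBytes,   // absolute budget exceeded
    regression: deltaPct > 10,                // grew more than 10%
  };
}
```

Run it against the previous build's size during CI and fail the build when `regression` or `overBudget` is true.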
CMS and Plugin Updates
If you're using WordPress, Drupal, or another CMS, check for recent automatic updates. Plugin updates sometimes introduce performance regressions.
Check your plugin list for recent update dates. If an update coincides with your score drop, try reverting that plugin as a test.
Content Changes
New content with unoptimized images is a common culprit. A single 5MB hero image can tank your LCP score.
Check recently published or updated pages for large images without proper sizing, embedded videos loading eagerly instead of lazily, and complex animations or interactive elements.
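As a sketch, offscreen images and embedded iframes can be deferred with the native loading attribute (the filenames and video ID below are placeholders). Just don't lazy-load the LCP image itself, which needs to start loading as early as possible:

```html
<!-- Defer offscreen media so it doesn't compete with the LCP element. -->
<img src="/gallery/photo.jpg" width="800" height="600"
     loading="lazy" alt="Gallery photo">

<iframe src="https://www.youtube.com/embed/VIDEO_ID"
        width="560" height="315" loading="lazy"
        title="Product demo"></iframe>
```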
Third-Party Services
Services you embed (analytics, chat widgets, advertising, social media integrations) can degrade without any changes on your end.
Did you add new integrations? Did existing integrations release updates? Are third-party CDNs experiencing slowdowns?
Use your browser's Network tab to see if third-party resources are loading slowly.
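You can pull the same information programmatically from the Resource Timing API. This sketch takes timing entries (in the browser, from `performance.getEntriesByType("resource")`) and lists third-party resources slower than a threshold; the 500 ms cutoff is an arbitrary example value, and the function is kept pure so it can be tested with plain objects:

```javascript
// List third-party resources slower than a threshold, given Resource
// Timing entries. In the browser, call it with:
//   slowThirdParties(performance.getEntriesByType("resource"), location.hostname)
function slowThirdParties(entries, pageHost, thresholdMs = 500) {
  return entries
    .filter((e) => {
      let host;
      try { host = new URL(e.name).hostname; } catch { return false; }
      return host !== pageHost && e.duration > thresholdMs;
    })
    .map((e) => ({ url: e.name, ms: Math.round(e.duration) }));
}
```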
Common Causes and Fixes
Large or Unoptimized Images (LCP)
Images are the most common cause of LCP problems. Check the LCP element that PageSpeed identifies and ask: Is this image optimized for web? Is it properly sized for its display dimensions? Is it using modern formats like WebP or AVIF? Is it loading early enough?
Fixes include compressing images, using responsive images with srcset, converting to WebP format, and preloading your LCP image:
<link rel="preload" as="image" href="/hero-image.webp">
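A responsive-image sketch combining srcset and sizes might look like this (the filenames, breakpoint, and dimensions are placeholders for your own assets):

```html
<img src="/hero-800.webp"
     srcset="/hero-480.webp 480w, /hero-800.webp 800w, /hero-1600.webp 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     width="800" height="450" alt="Hero image">
```

The browser picks the smallest candidate that covers the rendered size, so phones never download the 1600px file.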
Render-Blocking Resources (LCP)
CSS and JavaScript in your document head can block rendering until they're loaded and processed.
Check for large CSS files that could be split or loaded conditionally, JavaScript that could be deferred or loaded asynchronously, and resources from slow third-party domains.
Add defer or async to scripts that don't need to run immediately:
<script src="analytics.js" defer></script>
Slow Server Response (LCP)
If your Time to First Byte (TTFB) is high, everything else is delayed. Google's guidance treats a TTFB under 800ms as good, and Lighthouse flags server responses that take longer than roughly 600ms.
Check for database queries that have become slow, missing caching (page cache, query cache, object cache), server resource constraints, and slow hosting or infrastructure issues.
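As a sketch, the Navigation Timing API exposes the numbers you need to measure TTFB and see roughly where the time went. In the browser the entry comes from `performance.getEntriesByType("navigation")[0]`; the helpers below operate on a plain timing entry so the arithmetic is easy to follow:

```javascript
// TTFB measured from the start of the navigation. It includes redirects,
// DNS lookup, connection setup, and server think time.
function ttfb(navEntry) {
  return navEntry.responseStart - navEntry.startTime; // startTime is 0 for navigations
}

// Break the wait down so you can see which stage is slow.
function ttfbBreakdown(navEntry) {
  return {
    dns: navEntry.domainLookupEnd - navEntry.domainLookupStart,
    connect: navEntry.connectEnd - navEntry.connectStart,
    server: navEntry.responseStart - navEntry.requestStart,
  };
}
```

If `server` dominates the breakdown, look at your application and database; if `dns` or `connect` dominates, look at your DNS provider and CDN.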
Missing Image Dimensions (CLS)
Images without explicit width and height cause layout shift when they load. The browser doesn't know how much space to reserve, so content jumps when the image appears.
Always specify dimensions:
<img src="photo.jpg" width="800" height="600" alt="Description">
Or use CSS aspect-ratio:
img {
aspect-ratio: 4/3;
width: 100%;
}
Web Fonts Causing Flash (CLS)
When custom fonts load, text may shift from the fallback font to the custom font, causing layout shift.
Use font-display: swap or font-display: optional to control this behavior. Preload critical fonts:
<link rel="preload" href="/fonts/custom.woff2" as="font" type="font/woff2" crossorigin>
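The matching @font-face rule might look like this sketch (the family name is a placeholder; the file path follows the preload example above):

```css
/* swap shows fallback text immediately, then swaps in the custom font;
   optional skips the swap entirely if the font isn't ready in time. */
@font-face {
  font-family: "CustomFont";
  src: url("/fonts/custom.woff2") format("woff2");
  font-display: swap;
}
```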
Ads and Dynamic Content (CLS)
Advertisements and other dynamically loaded content are major CLS offenders. When they load, they push other content around.
Reserve space for dynamic content with explicit dimensions. For ads, use the slot dimensions your ad network specifies rather than letting ads determine their own size.
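For example, a fixed-size ad slot can be sized in CSS before the ad loads. The class name is illustrative, and 300x250 is just a common ad unit; use the dimensions your network specifies:

```css
/* Reserve the slot's space up front so the loading ad
   doesn't push surrounding content down. */
.ad-slot {
  width: 300px;
  min-height: 250px;
}
```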
Heavy JavaScript (INP)
Long JavaScript tasks block the main thread, making the page feel unresponsive. If users click or tap and nothing happens for hundreds of milliseconds, INP suffers.
Use browser DevTools Performance tab to identify long tasks. Break large scripts into smaller chunks. Consider using web workers for CPU-intensive operations.
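One common chunking approach is to yield back to the event loop between batches so pending clicks and keypresses can be handled. This sketch uses setTimeout, which is broadly supported (newer browsers also offer scheduler.yield()):

```javascript
// Process a large array in small chunks, yielding to the event loop
// between chunks so the main thread never blocks for long.
async function processInChunks(items, processItem, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(processItem(item));
    }
    // Yield so a pending input event can run between chunks.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return results;
}
```

The tradeoff is slightly longer total processing time in exchange for a responsive page while the work runs.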
Third-Party Script Bloat
Analytics, chat widgets, social buttons, and other third-party scripts add up. Each one loads JavaScript, potentially blocks the main thread, and may load additional resources.
Audit your third-party scripts. Remove any you're not actively using. Load non-critical scripts with defer or after page load. Consider using facade patterns for heavy widgets (show a static placeholder until the user interacts).
Testing Your Fixes
After making changes, run PageSpeed Insights again. Note that scores can vary between runs, especially for the field data portion which reflects real user experiences over time.
For reliable testing, run the lab test multiple times and average the results, test from the same location and device type, and test during similar traffic conditions.
Changes to field data take time to reflect as they're based on actual user sessions over the past 28 days.
When Scores Drop Without Changes
Sometimes scores drop even when you haven't changed anything.
PageSpeed's thresholds and scoring weights occasionally change. Google updates what constitutes "good" performance. What was acceptable last month might not be this month.
Field data fluctuates based on your actual visitors. If you suddenly get traffic from users with slower connections or older devices, field metrics will decline.
Third-party services can degrade. Even if you haven't changed your code, the services you depend on might be slower.
Chrome and other browsers receive updates that can change how pages are measured. New capabilities or stricter requirements can affect scores.
Preventing Future Drops
Set up PageSpeed monitoring to track your scores over time. When you have historical data, you can catch drops quickly and correlate them with specific changes.
Include performance testing in your development workflow. Run PageSpeed checks before deploying significant changes. Performance budgets help prevent regressions.
Monitor third-party script performance separately. Services like RequestMetrics or SpeedCurve can track how much third-party scripts contribute to your load times.
Keep dependencies updated but test thoroughly. Framework and library updates usually improve performance, but occasionally they don't.
How SecurityBot Helps
SecurityBot's PageSpeed monitoring tracks your Core Web Vitals over time and alerts you when scores drop. Instead of discovering problems when someone complains or when you happen to run a test, you get notified automatically.
Historical tracking makes diagnosis easier because you can see exactly when metrics changed and correlate with other monitoring data (like SSL changes, downtime events, or content updates).
Start your free 14-day trial and track your PageSpeed scores alongside security monitoring.
Frequently Asked Questions
How much do PageSpeed scores affect SEO?
Core Web Vitals are a ranking signal, but content relevance and authority are still more important. A site with great content and mediocre performance will likely outrank a site with perfect performance and thin content. That said, performance improvements help with user experience and can provide a ranking edge when competing with similar content.
Why do my mobile and desktop scores differ so much?
Mobile scores are usually lower because the test simulates slower network conditions (4G) and less powerful processors. Most real mobile users face these constraints, so the mobile score reflects actual mobile experience. Focus on mobile first since that's where most users likely are.
Should I optimize for lab data or field data?
Optimize for both, but prioritize field data. Lab data tells you about potential performance, but field data reflects actual user experience. If your lab scores are good but field scores are bad, real users on real devices are having problems that lab tests don't catch.
How quickly do field data scores update?
Field data is aggregated from real Chrome users over the past 28 days. Changes you make today won't fully reflect in field data for about a month. Lab scores update immediately after you make changes.
Is a score of 90+ necessary?
Scores in the green zone (90+) are ideal, but orange zone scores (50-89) aren't catastrophic. Focus on getting Core Web Vitals into the "good" threshold rather than chasing perfect scores. A page with 85 that meets all Core Web Vitals thresholds is in good shape.
Last updated: January 2026 | Written by Jason Gilmore, Founder of SecurityBot