File CVE-2025-66418.patch of Package python-urllib3_1

From 24d7b67eac89f94e11003424bcf0d8f7b72222a8 Mon Sep 17 00:00:00 2001
From: Illia Volochii <illia.volochii@gmail.com>
Date: Fri, 5 Dec 2025 16:41:33 +0200
Subject: [PATCH] Merge commit from fork

* Add a hard-coded limit for the decompression chain

* Reuse new list
---
 changelog/GHSA-gm62-xv2j-4w53.security.rst |  4 ++++
 src/urllib3/response.py                    | 12 +++++++++++-
 test/test_response.py                      | 10 ++++++++++
 3 files changed, 25 insertions(+), 1 deletion(-)
 create mode 100644 changelog/GHSA-gm62-xv2j-4w53.security.rst

Index: urllib3-1.26.20/changelog/GHSA-gm62-xv2j-4w53.security.rst
===================================================================
--- /dev/null
+++ urllib3-1.26.20/changelog/GHSA-gm62-xv2j-4w53.security.rst
@@ -0,0 +1,4 @@
+Fixed a security issue where an attacker could compose an HTTP response with
+virtually unlimited links in the ``Content-Encoding`` header, potentially
+leading to a denial of service (DoS) attack by exhausting system resources
+during decoding. The number of allowed chained encodings is now limited to 5.
Index: urllib3-1.26.20/src/urllib3/response.py
===================================================================
--- urllib3-1.26.20.orig/src/urllib3/response.py
+++ urllib3-1.26.20/src/urllib3/response.py
@@ -225,8 +225,18 @@ class MultiDecoder(object):
         they were applied.
     """
 
-    def __init__(self, modes):
-        self._decoders = [_get_decoder(m.strip()) for m in modes.split(",")]
+    # Maximum allowed number of chained HTTP encodings in the
+    # Content-Encoding header.
+    max_decode_links = 5
+
+    def __init__(self, modes: str) -> None:
+        encodings = [m.strip() for m in modes.split(",")]
+        if len(encodings) > self.max_decode_links:
+            raise DecodeError(
+                "Too many content encodings in the chain: "
+                f"{len(encodings)} > {self.max_decode_links}"
+            )
+        self._decoders = [_get_decoder(e) for e in encodings]
 
     def flush(self):
         return self._decoders[0].flush()
Index: urllib3-1.26.20/test/test_response.py
===================================================================
--- urllib3-1.26.20.orig/test/test_response.py
+++ urllib3-1.26.20/test/test_response.py
@@ -477,6 +477,16 @@ class TestResponse(object):
 
         assert r.data == b"foo"
 
+    def test_read_multi_decoding_too_many_links(self):
+        fp = BytesIO(b"foo")
+        with pytest.raises(
+            DecodeError, match="Too many content encodings in the chain: 6 > 5"
+        ):
+            HTTPResponse(
+                fp,
+                headers={"content-encoding": "gzip, deflate, br, zstd, gzip, deflate"},
+            )
+
     def test_body_blob(self):
         resp = HTTPResponse(b"foo")
         assert resp.data == b"foo"