Revisions of python-Scrapy

buildservice-autocommit accepted request 1164153 from Factory Maintainer (factory-maintainer) (revision 40)
baserev update by copy to link target
buildservice-autocommit accepted request 1161494 from Dirk Mueller (dirkmueller) (revision 39)
baserev update by copy to link target
Dirk Mueller (dirkmueller) committed (revision 38)
- update to 2.11.1 (bsc#1220514, CVE-2024-1892, bsc#1221986):
     advisory`_ for more information. (bsc#1221986)
Dirk Mueller (dirkmueller) committed (revision 37)
- update to 2.11.1 (bsc#1220514, CVE-2024-1892):
  * Addressed `ReDoS vulnerabilities`_ (bsc#1220514, CVE-2024-1892):
    -  ``scrapy.utils.iterators.xmliter`` is now deprecated in favor of
       :func:`~scrapy.utils.iterators.xmliter_lxml`, which
       :class:`~scrapy.spiders.XMLFeedSpider` now uses (a usage sketch
       follows this entry). To minimize the impact of this change on existing
       code, :func:`~scrapy.utils.iterators.xmliter_lxml` now supports
       indicating the node namespace with a prefix in the node name, and big
       files with highly nested trees when using libxml2 2.7+.
    -  Fixed regular expressions in the implementation of the
       :func:`~scrapy.utils.response.open_in_browser` function.
    .. _ReDoS vulnerabilities: https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
  *  :setting:`DOWNLOAD_MAXSIZE` and :setting:`DOWNLOAD_WARNSIZE` now also apply
     to the decompressed response body (a settings sketch follows this entry).
     Please see the `7j7m-v7m3-jqm7 security advisory`_ for more information.
     .. _7j7m-v7m3-jqm7 security advisory: https://github.com/scrapy/scrapy/security/advisories/GHSA-7j7m-v7m3-jqm7
  *  Also in relation to the `7j7m-v7m3-jqm7 security advisory`_, the
     deprecated ``scrapy.downloadermiddlewares.decompression`` module has been
     removed.
  *  The ``Authorization`` header is now dropped on redirects to a different
     domain. Please see the `cw9j-q3vf-hrrv security advisory`_ for more
     information.
     .. _cw9j-q3vf-hrrv security advisory: https://github.com/scrapy/scrapy/security/advisories/GHSA-cw9j-q3vf-hrrv
  *  The OS signal handling code was refactored to no longer use private Twisted
     functions. (:issue:`6024`, :issue:`6064`, :issue:`6112`)
  *  Improved documentation for :class:`~scrapy.crawler.Crawler` initialization
     changes made in the 2.11.0 release. (:issue:`6057`, :issue:`6147`)
  *  Extended documentation for :attr:`Request.meta <scrapy.http.Request.meta>`.
  *  Fixed the :reqmeta:`dont_merge_cookies` documentation. (:issue:`5936`)
  *  Added a link to Zyte's export guides to the feed exports documentation.
  *  Added a missing note about backward-incompatible changes in
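A minimal sketch of migrating from the deprecated ``xmliter`` to
:func:`~scrapy.utils.iterators.xmliter_lxml` mentioned above; the feed body,
URL, and element names are made up for illustration:

.. code-block:: python

    from scrapy.http import XmlResponse
    from scrapy.utils.iterators import xmliter_lxml  # replaces deprecated xmliter

    body = b"<items><item><id>1</id></item><item><id>2</id></item></items>"
    response = XmlResponse(url="https://example.com/feed.xml", body=body)

    # Each yielded node is a Selector rooted at a single <item> element.
    for node in xmliter_lxml(response, "item"):
        print(node.xpath(".//id/text()").get())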
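And a hedged ``settings.py`` sketch for the size limits above; the byte values
are arbitrary examples, not recommended defaults:

.. code-block:: python

    # settings.py -- since 2.11.1 these limits also count the *decompressed*
    # response body, not only the bytes received on the wire.
    DOWNLOAD_WARNSIZE = 8 * 1024 * 1024    # log a warning above 8 MiB
    DOWNLOAD_MAXSIZE = 32 * 1024 * 1024    # cancel the download above 32 MiB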
buildservice-autocommit accepted request 1137882 from Daniel Garcia (dgarcia) (revision 36)
baserev update by copy to link target
Daniel Garcia (dgarcia) committed (revision 35)
- Disable flaky test
Daniel Garcia (dgarcia) committed (revision 34)
- Add patch twisted-23.8.0-compat.patch gh#scrapy/scrapy#6064
- Update to 2.11.0:
  - Spiders can now modify settings in their from_crawler methods,
    e.g. based on spider arguments (a sketch follows this entry).
  - Periodic logging of stats.
  - Bug fixes.
- 2.10.0:
  - Added Python 3.12 support, dropped Python 3.7 support.
  - The new add-ons framework simplifies configuring 3rd-party
    components that support it.
  - Exceptions to retry can now be configured.
  - Many fixes and improvements for feed exports.
- 2.9.0:
  - Per-domain download settings.
  - Compatibility with new cryptography and new parsel.
  - JMESPath selectors from the new parsel (a selector sketch follows
    this entry).
  - Bug fixes.
- 2.8.0:
  - This is a maintenance release, with minor features, bug fixes, and
    cleanups.
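A minimal sketch of the 2.11.0 item above (settings modified from
``from_crawler``); the spider name, argument, and chosen setting are
hypothetical:

.. code-block:: python

    import scrapy


    class TunableSpider(scrapy.Spider):
        name = "tunable"  # hypothetical

        @classmethod
        def from_crawler(cls, crawler, *args, **kwargs):
            spider = super().from_crawler(crawler, *args, **kwargs)
            # Settings are not frozen yet at this point (Scrapy 2.11+), so a
            # spider argument (scrapy crawl tunable -a polite=1) can tune them.
            if kwargs.get("polite"):
                crawler.settings.set("DOWNLOAD_DELAY", 2.0, priority="spider")
            return spider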
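And a sketch of the 2.9.0 JMESPath item, using the parsel selectors that
Scrapy builds on; the JSON payload is made up, and the feature assumes a
parsel version with JMESPath support (plus the ``jmespath`` package):

.. code-block:: python

    from parsel import Selector  # Scrapy's Selector is built on parsel

    data = '{"user": {"name": "Ada", "tags": ["python", "scrapy"]}}'
    sel = Selector(text=data, type="json")

    print(sel.jmespath("user.name").get())      # Ada
    print(sel.jmespath("user.tags[0]").get())   # python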
buildservice-autocommit accepted request 1034478 from Markéta Machová (mcalabkova) (revision 33)
baserev update by copy to link target
Markéta Machová (mcalabkova) accepted request 1034369 from Yogalakshmi Arunachalam (yarunachalam) (revision 32)
- Update to v2.7.1 
  * Relaxed the restriction introduced in 2.6.2 so that the Proxy-Authorization
    header can again be set explicitly in certain cases, restoring
    compatibility with scrapy-zyte-smartproxy 2.1.0 and older.
  Bug fixes:
  * Full changelog: https://docs.scrapy.org/en/latest/news.html#scrapy-2-7-1-2022-11-02
buildservice-autocommit accepted request 1032071 from Matej Cepl (mcepl) (revision 31)
baserev update by copy to link target
Matej Cepl (mcepl) accepted request 1031641 from Yogalakshmi Arunachalam (yarunachalam) (revision 30)
- Update to v2.7.0 
  Highlights:
  * Added Python 3.11 support, dropped Python 3.6 support
  * Improved support for :ref:`asynchronous callbacks <topics-coroutines>`
    (a sketch follows this entry)
  * :ref:`Asyncio support <using-asyncio>` is enabled by default on new projects
  * Output names of item fields can now be arbitrary strings
  * Centralized :ref:`request fingerprinting <request-fingerprints>` configuration is now possible
  Modified requirements
  * Python 3.7 or greater is now required; support for Python 3.6 has been dropped. Support for the upcoming Python 3.11 has been added.
    The minimum required version of some dependencies has changed as well:
    - lxml: 3.5.0 → 4.3.0
    - Pillow (:ref:`images pipeline <images-pipeline>`): 4.0.0 → 7.1.0
    - zope.interface: 5.0.0 → 5.1.0
    (:issue:`5512`, :issue:`5514`, :issue:`5524`, :issue:`5563`, :issue:`5664`, :issue:`5670`, :issue:`5678`)
  Deprecations
    - :meth:`ImagesPipeline.thumb_path <scrapy.pipelines.images.ImagesPipeline.thumb_path>` must now accept an item parameter (:issue:`5504`, :issue:`5508`).
    - The scrapy.downloadermiddlewares.decompression module is now deprecated (:issue:`5546`, :issue:`5547`).
  
  Complete changelog https://github.com/scrapy/scrapy/blob/2.7/docs/news.rst
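A minimal sketch of an asynchronous callback as highlighted above; the spider
name, URL, and CSS query are placeholders, and awaiting asyncio code assumes
the asyncio reactor that 2.7 enables by default in new projects:

.. code-block:: python

    import asyncio

    import scrapy


    class AsyncCallbackSpider(scrapy.Spider):
        name = "async_callback_demo"  # hypothetical
        start_urls = ["https://example.com/"]

        async def parse(self, response):
            # Callbacks may be coroutines or async generators; any awaitable
            # work (here a no-op sleep) can happen before items are yielded.
            await asyncio.sleep(0)
            for href in response.css("a::attr(href)").getall():
                yield {"link": response.urljoin(href)}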
buildservice-autocommit accepted request 1002736 from Dirk Mueller (dirkmueller) (revision 29)
baserev update by copy to link target
Dirk Mueller (dirkmueller) accepted request 1002338 from Yogalakshmi Arunachalam (yarunachalam) (revision 28)
- Update to v2.6.2 
  Security bug fix:
  * When HttpProxyMiddleware processes a request with proxy metadata, and that proxy metadata includes proxy credentials,
    HttpProxyMiddleware sets the Proxy-Authorization header, but only if that header is not already set.
  * There are third-party proxy-rotation downloader middlewares that set different proxy metadata every time they process a request.
  * Because of request retries and redirects, the same request can be processed by downloader middlewares more than once,
    including both HttpProxyMiddleware and any third-party proxy-rotation downloader middleware.
  * These third-party proxy-rotation downloader middlewares could change the proxy metadata of a request to a new value,
    but fail to remove the Proxy-Authentication header from the previous value of the proxy metadata, causing the credentials of one
    proxy to be sent to a different proxy.
  * To prevent the unintended leaking of proxy credentials, the behavior of HttpProxyMiddleware is now as follows when processing a request:
    + If the request being processed defines proxy metadata that includes credentials, the Proxy-Authorization header is always updated
      to feature those credentials.
    + If the request being processed defines proxy metadata without credentials, the Proxy-Authorization header is removed unless
      it was originally defined for the same proxy URL.
    + To remove proxy credentials while keeping the same proxy URL, remove the Proxy-Authorization header.
    + If the request has no proxy metadata, or that metadata is a falsy value (e.g. None), the Proxy-Authorization header is removed.
    + It is no longer possible to set a proxy URL through the proxy metadata but set the credentials through the Proxy-Authorization header.
      Set proxy credentials through the proxy metadata instead (a sketch of this pattern follows this entry).
  * Also fixes the following regressions introduced in 2.6.0:
    + CrawlerProcess again supports crawling multiple spiders (issue 5435, issue 5436)
    + Installing a Twisted reactor before Scrapy does (e.g. importing twisted.internet.reactor somewhere at the module level)
    no longer prevents Scrapy from starting, as long as a different reactor is not specified in TWISTED_REACTOR (issue 5525, issue 5528)
    + Fixed an exception that was being logged after the spider finished under certain conditions (issue 5437, issue 5440)
    + The --output/-o command-line parameter again supports a value starting with a hyphen (issue 5444, issue 5445)
    + The scrapy parse -h command no longer throws an error (issue 5481, issue 5482)
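A minimal sketch of the supported pattern described above: keep the
credentials inside the proxy URL of the request's ``proxy`` meta key and let
HttpProxyMiddleware set ``Proxy-Authorization``; the spider name, target URL,
and proxy endpoint are placeholders:

.. code-block:: python

    import scrapy


    class ProxyCredentialsSpider(scrapy.Spider):
        name = "proxy_credentials_demo"  # hypothetical

        def start_requests(self):
            yield scrapy.Request(
                "https://example.com/",
                # Credentials travel in the proxy meta value; do not set the
                # Proxy-Authorization header yourself (2.6.2+ may remove or
                # replace it as described above).
                meta={"proxy": "http://user:secret@proxy.example.com:8080"},
            )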
buildservice-autocommit accepted request 959733 from Dirk Mueller (dirkmueller) (revision 27)
baserev update by copy to link target
Dirk Mueller (dirkmueller) accepted request 959304 from Benjamin Greiner (bnavigator) (revision 26)
- Update runtime requirements and test deselections
buildservice-autocommit accepted request 958587 from Matej Cepl (mcepl) (revision 25)
baserev update by copy to link target
Matej Cepl (mcepl) committed (revision 24)
Fix changelogs
Matej Cepl (mcepl) committed (revision 23)
- Upgrade to 2.6.1:
- Remove unnecessary patches:
  - remove-h2-version-restriction.patch
  - add-peak-method-to-queues.patch
Matej Cepl (mcepl) accepted request 946843 from Benjamin Greiner (bnavigator) (revision 22)
- Skip a failing test in python310: exception format not recognized
Matej Cepl (mcepl) accepted request 923811 from Benjamin Greiner (bnavigator) (revision 21)
- Update to 2.5.1, Security bug fix
  * boo#1191446, CVE-2021-41125
  * If you use HttpAuthMiddleware (i.e. the http_user and
    http_pass spider attributes) for HTTP authentication,
    any request exposes your credentials to the request
    target.
  * To prevent unintended exposure of authentication
    credentials to other domains, you must now set an
    additional spider attribute, http_auth_domain, and point
    it to the specific domain to which the authentication
    credentials must be sent (a sketch follows this entry).
  * If the http_auth_domain spider attribute is not set, the
    domain of the first request will be considered the HTTP
    authentication target, and authentication credentials
    will only be sent in requests targeting that domain.
  * If you need to send the same HTTP authentication
    credentials to multiple domains, you can use
    w3lib.http.basic_auth_header instead to set the value of
    the Authorization header of your requests.
  * If you really want your spider to send the same HTTP
    authentication credentials to any domain, set the
    http_auth_domain spider attribute to None.
  * Finally, if you are a user of scrapy-splash, note that
    this version of Scrapy breaks compatibility with
    scrapy-splash 0.7.2 and earlier. You will need to upgrade
    scrapy-splash to a later version for it to continue to
    work.
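A minimal sketch of the spider attributes described above; the names,
credentials, and domain are placeholders:

.. code-block:: python

    import scrapy


    class IntranetSpider(scrapy.Spider):
        name = "intranet_demo"  # hypothetical
        # HttpAuthMiddleware credentials, now pinned to a single domain
        # via http_auth_domain (2.5.1+):
        http_user = "user"
        http_pass = "secret"
        http_auth_domain = "intranet.example.com"
        start_urls = ["https://intranet.example.com/"]

        def parse(self, response):
            yield {"status": response.status}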