fix vendor
This commit is contained in:
parent 4963339b34
commit 731ddbc133
1
vendor/contextlib2-0.6.0.post1.dist-info/INSTALLER
vendored
Normal file
@@ -0,0 +1 @@
pip
122
vendor/contextlib2-0.6.0.post1.dist-info/LICENSE.txt
vendored
Normal file
@@ -0,0 +1,122 @@


A. HISTORY OF THE SOFTWARE
==========================

contextlib2 is a derivative of the contextlib module distributed by the PSF
as part of the Python standard library. Accordingly, it is itself redistributed
under the PSF license (reproduced in full below). As the contextlib module
was added only in Python 2.5, the licenses for earlier Python versions are
not applicable and have not been included.

Python was created in the early 1990s by Guido van Rossum at Stichting
Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands
as a successor of a language called ABC. Guido remains Python's
principal author, although it includes many contributions from others.

In 1995, Guido continued his work on Python at the Corporation for
National Research Initiatives (CNRI, see http://www.cnri.reston.va.us)
in Reston, Virginia where he released several versions of the
software.

In May 2000, Guido and the Python core development team moved to
BeOpen.com to form the BeOpen PythonLabs team. In October of the same
year, the PythonLabs team moved to Digital Creations (now Zope
Corporation, see http://www.zope.com). In 2001, the Python Software
Foundation (PSF, see http://www.python.org/psf/) was formed, a
non-profit organization created specifically to own Python-related
Intellectual Property. Zope Corporation is a sponsoring member of
the PSF.

All Python releases are Open Source (see http://www.opensource.org for
the Open Source Definition). Historically, most, but not all, Python
releases have also been GPL-compatible; the table below summarizes
the various releases that included the contextlib module.

    Release         Derived     Year        Owner       GPL-
                    from                                compatible? (1)

    2.5             2.4         2006        PSF         yes
    2.5.1           2.5         2007        PSF         yes
    2.5.2           2.5.1       2008        PSF         yes
    2.5.3           2.5.2       2008        PSF         yes
    2.6             2.5         2008        PSF         yes
    2.6.1           2.6         2008        PSF         yes
    2.6.2           2.6.1       2009        PSF         yes
    2.6.3           2.6.2       2009        PSF         yes
    2.6.4           2.6.3       2009        PSF         yes
    2.6.5           2.6.4       2010        PSF         yes
    3.0             2.6         2008        PSF         yes
    3.0.1           3.0         2009        PSF         yes
    3.1             3.0.1       2009        PSF         yes
    3.1.1           3.1         2009        PSF         yes
    3.1.2           3.1.1       2010        PSF         yes
    3.1.3           3.1.2       2010        PSF         yes
    3.1.4           3.1.3       2011        PSF         yes
    3.2             3.1         2011        PSF         yes
    3.2.1           3.2         2011        PSF         yes
    3.2.2           3.2.1       2011        PSF         yes
    3.3             3.2         2012        PSF         yes

Footnotes:

(1) GPL-compatible doesn't mean that we're distributing Python under
    the GPL. All Python licenses, unlike the GPL, let you distribute
    a modified version without making your changes open source. The
    GPL-compatible licenses make it possible to combine Python with
    other software that is released under the GPL; the others don't.

Thanks to the many outside volunteers who have worked under Guido's
direction to make these releases possible.


B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON
===============================================================

PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------

1. This LICENSE AGREEMENT is between the Python Software Foundation
("PSF"), and the Individual or Organization ("Licensee") accessing and
otherwise using this software ("Python") in source or binary form and
its associated documentation.

2. Subject to the terms and conditions of this License Agreement, PSF hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python alone or in any derivative version,
provided, however, that PSF's License Agreement and PSF's notice of copyright,
i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
2011 Python Software Foundation; All Rights Reserved" are retained in Python
alone or in any derivative version prepared by Licensee.

3. In the event Licensee prepares a derivative work that is based on
or incorporates Python or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python.

4. PSF is making Python available to Licensee on an "AS IS"
basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.

5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.

7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF and
Licensee. This License Agreement does not grant permission to use PSF
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.

8. By copying, installing or otherwise using Python, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.
70
vendor/contextlib2-0.6.0.post1.dist-info/METADATA
vendored
Normal file
@@ -0,0 +1,70 @@
Metadata-Version: 2.1
Name: contextlib2
Version: 0.6.0.post1
Summary: Backports and enhancements for the contextlib module
Home-page: http://contextlib2.readthedocs.org
Author: Nick Coghlan
Author-email: ncoghlan@gmail.com
License: PSF License
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: License :: OSI Approved :: Python Software Foundation License
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*

.. image:: https://jazzband.co/static/img/badge.svg
   :target: https://jazzband.co/
   :alt: Jazzband

.. image:: https://readthedocs.org/projects/contextlib2/badge/?version=latest
   :target: https://contextlib2.readthedocs.org/
   :alt: Latest Docs

.. image:: https://img.shields.io/travis/jazzband/contextlib2/master.svg
   :target: http://travis-ci.org/jazzband/contextlib2

.. image:: https://coveralls.io/repos/github/jazzband/contextlib2/badge.svg?branch=master
   :target: https://coveralls.io/github/jazzband/contextlib2?branch=master

.. image:: https://landscape.io/github/jazzband/contextlib2/master/landscape.svg
   :target: https://landscape.io/github/jazzband/contextlib2/

contextlib2 is a backport of the `standard library's contextlib
module <https://docs.python.org/3.5/library/contextlib.html>`_ to
earlier Python versions.

It also serves as a real world proving ground for possible future
enhancements to the standard library version.

Development
-----------

contextlib2 has no runtime dependencies, but requires ``unittest2`` for testing
on Python 2.x, as well as ``setuptools`` and ``wheel`` to generate universal
wheel archives.

Local testing is just a matter of running ``python test_contextlib2.py``.

You can test against multiple versions of Python with
`tox <https://tox.testrun.org/>`_::

    pip install tox
    tox

Versions currently tested in both tox and Travis CI are:

* CPython 2.7
* CPython 3.4
* CPython 3.5
* CPython 3.6
* CPython 3.7
* PyPy
* PyPy3
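The metadata above describes contextlib2 as a drop-in backport of the standard library's contextlib. A minimal sketch of its headline feature, ExitStack, follows; it uses the standard library module (whose API matches the backport, so the snippet is self-contained on Python 3), and the temporary files are fabricated purely for illustration. On Python 2 the import would be ``from contextlib2 import ExitStack``.

```python
import os
import tempfile
from contextlib import ExitStack  # API-compatible with contextlib2.ExitStack

# Create a few throwaway files to stand in for real inputs.
paths = []
for _ in range(3):
    fd, path = tempfile.mkstemp()
    os.close(fd)
    paths.append(path)

with ExitStack() as stack:
    # Each successfully opened file is registered for cleanup, so all of
    # them get closed even if a later open() in the list raises.
    files = [stack.enter_context(open(p)) for p in paths]
    assert all(not f.closed for f in files)

# Leaving the with block ran every registered __exit__ in LIFO order.
assert all(f.closed for f in files)

for p in paths:
    os.remove(p)
```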
8
vendor/contextlib2-0.6.0.post1.dist-info/RECORD
vendored
Normal file
@@ -0,0 +1,8 @@
contextlib2-0.6.0.post1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
contextlib2-0.6.0.post1.dist-info/LICENSE.txt,sha256=xqev-sas2tLS3YfS12hDhiSraSYY2x8CvqOxHT85ePA,6054
contextlib2-0.6.0.post1.dist-info/METADATA,sha256=_kBcf3VJkbe-EMyAM1c5t5sRwBFfFu5YcfWCJMgVO1Q,2297
contextlib2-0.6.0.post1.dist-info/RECORD,,
contextlib2-0.6.0.post1.dist-info/WHEEL,sha256=8zNYZbwQSXoB9IfXOjPfeNwvAsALAjffgk27FqvCWbo,110
contextlib2-0.6.0.post1.dist-info/top_level.txt,sha256=RxWWBMkHA_rsw1laXJ8L3yE_fyYaBmvt2bVUvj3WbMg,12
contextlib2.py,sha256=5HjGflUzwWAUfcILhSmC2GqvoYdZZzFzVfIDztHigUs,16915
contextlib2.pyc,,
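Each RECORD line above has the form ``path,sha256=<digest>,<size>``, where the digest is the URL-safe base64 encoding of the file's SHA-256 hash with the trailing ``=`` padding stripped (the installed-files record format from PEP 376 and the wheel spec). A small sketch recomputing the INSTALLER entry, whose 4-byte content is ``pip`` plus a newline:

```python
import base64
import hashlib

def record_hash(data):
    """Digest a file body the way pip records it in RECORD."""
    digest = hashlib.sha256(data).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# The INSTALLER file contains the single line "pip\n" (4 bytes), which
# reproduces the hash in the RECORD entry above.
assert record_hash(b"pip\n") == "zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg"
```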
6
vendor/contextlib2-0.6.0.post1.dist-info/WHEEL
vendored
Normal file
@@ -0,0 +1,6 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.33.6)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any

1
vendor/contextlib2-0.6.0.post1.dist-info/top_level.txt
vendored
Normal file
@@ -0,0 +1 @@
contextlib2
518
vendor/contextlib2.py
vendored
Normal file
@@ -0,0 +1,518 @@
"""contextlib2 - backports and enhancements to the contextlib module"""

import abc
import sys
import warnings
from collections import deque
from functools import wraps

__all__ = ["contextmanager", "closing", "nullcontext",
           "AbstractContextManager",
           "ContextDecorator", "ExitStack",
           "redirect_stdout", "redirect_stderr", "suppress"]

# Backwards compatibility
__all__ += ["ContextStack"]


# Backport abc.ABC
if sys.version_info[:2] >= (3, 4):
    _abc_ABC = abc.ABC
else:
    _abc_ABC = abc.ABCMeta('ABC', (object,), {'__slots__': ()})


# Backport classic class MRO
def _classic_mro(C, result):
    if C in result:
        return
    result.append(C)
    for B in C.__bases__:
        _classic_mro(B, result)
    return result


# Backport _collections_abc._check_methods
def _check_methods(C, *methods):
    try:
        mro = C.__mro__
    except AttributeError:
        mro = tuple(_classic_mro(C, []))

    for method in methods:
        for B in mro:
            if method in B.__dict__:
                if B.__dict__[method] is None:
                    return NotImplemented
                break
        else:
            return NotImplemented
    return True


class AbstractContextManager(_abc_ABC):
    """An abstract base class for context managers."""

    def __enter__(self):
        """Return `self` upon entering the runtime context."""
        return self

    @abc.abstractmethod
    def __exit__(self, exc_type, exc_value, traceback):
        """Raise any exception triggered within the runtime context."""
        return None

    @classmethod
    def __subclasshook__(cls, C):
        """Check whether subclass is considered a subclass of this ABC."""
        if cls is AbstractContextManager:
            return _check_methods(C, "__enter__", "__exit__")
        return NotImplemented


class ContextDecorator(object):
    """A base class or mixin that enables context managers to work as decorators."""

    def refresh_cm(self):
        """Returns the context manager used to actually wrap the call to the
        decorated function.

        The default implementation just returns *self*.

        Overriding this method allows otherwise one-shot context managers
        like _GeneratorContextManager to support use as decorators via
        implicit recreation.

        DEPRECATED: refresh_cm was never added to the standard library's
        ContextDecorator API
        """
        warnings.warn("refresh_cm was never added to the standard library",
                      DeprecationWarning)
        return self._recreate_cm()

    def _recreate_cm(self):
        """Return a recreated instance of self.

        Allows an otherwise one-shot context manager like
        _GeneratorContextManager to support use as
        a decorator via implicit recreation.

        This is a private interface just for _GeneratorContextManager.
        See issue #11647 for details.
        """
        return self

    def __call__(self, func):
        @wraps(func)
        def inner(*args, **kwds):
            with self._recreate_cm():
                return func(*args, **kwds)
        return inner


class _GeneratorContextManager(ContextDecorator):
    """Helper for @contextmanager decorator."""

    def __init__(self, func, args, kwds):
        self.gen = func(*args, **kwds)
        self.func, self.args, self.kwds = func, args, kwds
        # Issue 19330: ensure context manager instances have good docstrings
        doc = getattr(func, "__doc__", None)
        if doc is None:
            doc = type(self).__doc__
        self.__doc__ = doc
        # Unfortunately, this still doesn't provide good help output when
        # inspecting the created context manager instances, since pydoc
        # currently bypasses the instance docstring and shows the docstring
        # for the class instead.
        # See http://bugs.python.org/issue19404 for more details.

    def _recreate_cm(self):
        # _GCM instances are one-shot context managers, so the
        # CM must be recreated each time a decorated function is
        # called
        return self.__class__(self.func, self.args, self.kwds)

    def __enter__(self):
        try:
            return next(self.gen)
        except StopIteration:
            raise RuntimeError("generator didn't yield")

    def __exit__(self, type, value, traceback):
        if type is None:
            try:
                next(self.gen)
            except StopIteration:
                return
            else:
                raise RuntimeError("generator didn't stop")
        else:
            if value is None:
                # Need to force instantiation so we can reliably
                # tell if we get the same exception back
                value = type()
            try:
                self.gen.throw(type, value, traceback)
                raise RuntimeError("generator didn't stop after throw()")
            except StopIteration as exc:
                # Suppress StopIteration *unless* it's the same exception that
                # was passed to throw(). This prevents a StopIteration
                # raised inside the "with" statement from being suppressed.
                return exc is not value
            except RuntimeError as exc:
                # Don't re-raise the passed in exception
                if exc is value:
                    return False
                # Likewise, avoid suppressing if a StopIteration exception
                # was passed to throw() and later wrapped into a RuntimeError
                # (see PEP 479).
                if _HAVE_EXCEPTION_CHAINING and exc.__cause__ is value:
                    return False
                raise
            except:
                # only re-raise if it's *not* the exception that was
                # passed to throw(), because __exit__() must not raise
                # an exception unless __exit__() itself failed. But throw()
                # has to raise the exception to signal propagation, so this
                # fixes the impedance mismatch between the throw() protocol
                # and the __exit__() protocol.
                #
                if sys.exc_info()[1] is not value:
                    raise


def contextmanager(func):
    """@contextmanager decorator.

    Typical usage:

        @contextmanager
        def some_generator(<arguments>):
            <setup>
            try:
                yield <value>
            finally:
                <cleanup>

    This makes this:

        with some_generator(<arguments>) as <variable>:
            <body>

    equivalent to this:

        <setup>
        try:
            <variable> = <value>
            <body>
        finally:
            <cleanup>

    """
    @wraps(func)
    def helper(*args, **kwds):
        return _GeneratorContextManager(func, args, kwds)
    return helper


class closing(object):
    """Context to automatically close something at the end of a block.

    Code like this:

        with closing(<module>.open(<arguments>)) as f:
            <block>

    is equivalent to this:

        f = <module>.open(<arguments>)
        try:
            <block>
        finally:
            f.close()

    """
    def __init__(self, thing):
        self.thing = thing

    def __enter__(self):
        return self.thing

    def __exit__(self, *exc_info):
        self.thing.close()


class _RedirectStream(object):

    _stream = None

    def __init__(self, new_target):
        self._new_target = new_target
        # We use a list of old targets to make this CM re-entrant
        self._old_targets = []

    def __enter__(self):
        self._old_targets.append(getattr(sys, self._stream))
        setattr(sys, self._stream, self._new_target)
        return self._new_target

    def __exit__(self, exctype, excinst, exctb):
        setattr(sys, self._stream, self._old_targets.pop())


class redirect_stdout(_RedirectStream):
    """Context manager for temporarily redirecting stdout to another file.

        # How to send help() to stderr
        with redirect_stdout(sys.stderr):
            help(dir)

        # How to write help() to a file
        with open('help.txt', 'w') as f:
            with redirect_stdout(f):
                help(pow)
    """

    _stream = "stdout"


class redirect_stderr(_RedirectStream):
    """Context manager for temporarily redirecting stderr to another file."""

    _stream = "stderr"


class suppress(object):
    """Context manager to suppress specified exceptions

    After the exception is suppressed, execution proceeds with the next
    statement following the with statement.

        with suppress(FileNotFoundError):
            os.remove(somefile)
        # Execution still resumes here if the file was already removed
    """

    def __init__(self, *exceptions):
        self._exceptions = exceptions

    def __enter__(self):
        pass

    def __exit__(self, exctype, excinst, exctb):
        # Unlike isinstance and issubclass, CPython exception handling
        # currently only looks at the concrete type hierarchy (ignoring
        # the instance and subclass checking hooks). While Guido considers
        # that a bug rather than a feature, it's a fairly hard one to fix
        # due to various internal implementation details. suppress provides
        # the simpler issubclass based semantics, rather than trying to
        # exactly reproduce the limitations of the CPython interpreter.
        #
        # See http://bugs.python.org/issue12029 for more details
        return exctype is not None and issubclass(exctype, self._exceptions)


# Context manipulation is Python 3 only
_HAVE_EXCEPTION_CHAINING = sys.version_info[0] >= 3
if _HAVE_EXCEPTION_CHAINING:
    def _make_context_fixer(frame_exc):
        def _fix_exception_context(new_exc, old_exc):
            # Context may not be correct, so find the end of the chain
            while 1:
                exc_context = new_exc.__context__
                if exc_context is old_exc:
                    # Context is already set correctly (see issue 20317)
                    return
                if exc_context is None or exc_context is frame_exc:
                    break
                new_exc = exc_context
            # Change the end of the chain to point to the exception
            # we expect it to reference
            new_exc.__context__ = old_exc
        return _fix_exception_context

    def _reraise_with_existing_context(exc_details):
        try:
            # bare "raise exc_details[1]" replaces our carefully
            # set-up context
            fixed_ctx = exc_details[1].__context__
            raise exc_details[1]
        except BaseException:
            exc_details[1].__context__ = fixed_ctx
            raise
else:
    # No exception context in Python 2
    def _make_context_fixer(frame_exc):
        return lambda new_exc, old_exc: None

    # Use 3 argument raise in Python 2,
    # but use exec to avoid SyntaxError in Python 3
    def _reraise_with_existing_context(exc_details):
        exc_type, exc_value, exc_tb = exc_details
        exec("raise exc_type, exc_value, exc_tb")

# Handle old-style classes if they exist
try:
    from types import InstanceType
except ImportError:
    # Python 3 doesn't have old-style classes
    _get_type = type
else:
    # Need to handle old-style context managers on Python 2
    def _get_type(obj):
        obj_type = type(obj)
        if obj_type is InstanceType:
            return obj.__class__  # Old-style class
        return obj_type  # New-style class


# Inspired by discussions on http://bugs.python.org/issue13585
class ExitStack(object):
    """Context manager for dynamic management of a stack of exit callbacks

    For example:

        with ExitStack() as stack:
            files = [stack.enter_context(open(fname)) for fname in filenames]
            # All opened files will automatically be closed at the end of
            # the with statement, even if attempts to open files later
            # in the list raise an exception

    """
    def __init__(self):
        self._exit_callbacks = deque()

    def pop_all(self):
        """Preserve the context stack by transferring it to a new instance"""
        new_stack = type(self)()
        new_stack._exit_callbacks = self._exit_callbacks
        self._exit_callbacks = deque()
        return new_stack

    def _push_cm_exit(self, cm, cm_exit):
        """Helper to correctly register callbacks to __exit__ methods"""
        def _exit_wrapper(*exc_details):
            return cm_exit(cm, *exc_details)
        _exit_wrapper.__self__ = cm
        self.push(_exit_wrapper)

    def push(self, exit):
        """Registers a callback with the standard __exit__ method signature

        Can suppress exceptions the same way __exit__ methods can.

        Also accepts any object with an __exit__ method (registering a call
        to the method instead of the object itself)
        """
        # We use an unbound method rather than a bound method to follow
        # the standard lookup behaviour for special methods
        _cb_type = _get_type(exit)
        try:
            exit_method = _cb_type.__exit__
        except AttributeError:
            # Not a context manager, so assume it's a callable
            self._exit_callbacks.append(exit)
        else:
            self._push_cm_exit(exit, exit_method)
        return exit  # Allow use as a decorator

    def callback(self, callback, *args, **kwds):
        """Registers an arbitrary callback and arguments.

        Cannot suppress exceptions.
        """
        def _exit_wrapper(exc_type, exc, tb):
            callback(*args, **kwds)
        # We changed the signature, so using @wraps is not appropriate, but
        # setting __wrapped__ may still help with introspection
        _exit_wrapper.__wrapped__ = callback
        self.push(_exit_wrapper)
        return callback  # Allow use as a decorator

    def enter_context(self, cm):
        """Enters the supplied context manager

        If successful, also pushes its __exit__ method as a callback and
        returns the result of the __enter__ method.
        """
        # We look up the special methods on the type to match the with statement
        _cm_type = _get_type(cm)
        _exit = _cm_type.__exit__
        result = _cm_type.__enter__(cm)
        self._push_cm_exit(cm, _exit)
        return result

    def close(self):
        """Immediately unwind the context stack"""
        self.__exit__(None, None, None)

    def __enter__(self):
        return self

    def __exit__(self, *exc_details):
        received_exc = exc_details[0] is not None

        # We manipulate the exception state so it behaves as though
        # we were actually nesting multiple with statements
        frame_exc = sys.exc_info()[1]
        _fix_exception_context = _make_context_fixer(frame_exc)

        # Callbacks are invoked in LIFO order to match the behaviour of
        # nested context managers
        suppressed_exc = False
        pending_raise = False
        while self._exit_callbacks:
            cb = self._exit_callbacks.pop()
            try:
                if cb(*exc_details):
                    suppressed_exc = True
                    pending_raise = False
                    exc_details = (None, None, None)
            except:
                new_exc_details = sys.exc_info()
                # simulate the stack of exceptions by setting the context
                _fix_exception_context(new_exc_details[1], exc_details[1])
                pending_raise = True
                exc_details = new_exc_details
        if pending_raise:
            _reraise_with_existing_context(exc_details)
        return received_exc and suppressed_exc


# Preserve backwards compatibility
class ContextStack(ExitStack):
    """Backwards compatibility alias for ExitStack"""

    def __init__(self):
        warnings.warn("ContextStack has been renamed to ExitStack",
                      DeprecationWarning)
        super(ContextStack, self).__init__()

    def register_exit(self, callback):
        return self.push(callback)

    def register(self, callback, *args, **kwds):
        return self.callback(callback, *args, **kwds)

    def preserve(self):
        return self.pop_all()


class nullcontext(AbstractContextManager):
    """Context manager that does no additional processing.

    Used as a stand-in for a normal context manager, when a particular
    block of code is only sometimes used with a normal context manager:

    cm = optional_cm if condition else nullcontext()
    with cm:
        # Perform operation, using optional_cm if condition is True
    """

    def __init__(self, enter_result=None):
        self.enter_result = enter_result

    def __enter__(self):
        return self.enter_result

    def __exit__(self, *excinfo):
        pass
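The vendored module above reimplements the contextlib API, so its behaviour can be sketched with the standard library module (with the vendored copy on Python 2, the import would be ``from contextlib2 import ...``). The ``tag`` helper below is a made-up example:

```python
import io
import os
from contextlib import contextmanager, redirect_stdout, suppress

@contextmanager
def tag(name):
    # _GeneratorContextManager runs the body up to the yield on __enter__
    # and resumes it (here, the finally block) on __exit__.
    print("<%s>" % name)
    try:
        yield name
    finally:
        print("</%s>" % name)

buf = io.StringIO()
with redirect_stdout(buf):  # _RedirectStream swaps sys.stdout, then restores it
    with tag("body"):
        print("hello")
assert buf.getvalue() == "<body>\nhello\n</body>\n"

# suppress() swallows only the listed exception types (issubclass semantics).
with suppress(FileNotFoundError):
    os.remove("no-such-file-here.tmp")
```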
1
vendor/importlib_resources-3.3.1.dist-info/INSTALLER
vendored
Normal file
@@ -0,0 +1 @@
pip
13
vendor/importlib_resources-3.3.1.dist-info/LICENSE
vendored
Normal file
@@ -0,0 +1,13 @@
Copyright 2017-2019 Brett Cannon, Barry Warsaw

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
41
vendor/importlib_resources-3.3.1.dist-info/METADATA
vendored
Normal file
@@ -0,0 +1,41 @@
Metadata-Version: 2.1
Name: importlib-resources
Version: 3.3.1
Summary: Read resources from Python packages
Home-page: https://github.com/python/importlib_resources
Author: Barry Warsaw
Author-email: barry@python.org
License: UNKNOWN
Project-URL: Documentation, https://importlib-resources.readthedocs.io/
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Topic :: Software Development :: Libraries
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Requires-Python: !=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7
Requires-Dist: pathlib2 ; python_version < "3"
Requires-Dist: contextlib2 ; python_version < "3"
Requires-Dist: singledispatch ; python_version < "3.4"
Requires-Dist: typing ; python_version < "3.5"
Requires-Dist: zipp (>=0.4) ; python_version < "3.8"
Provides-Extra: docs
Requires-Dist: sphinx ; extra == 'docs'
Requires-Dist: rst.linker ; extra == 'docs'
Requires-Dist: jaraco.packaging ; extra == 'docs'

``importlib_resources`` is a backport of Python standard library
`importlib.resources
<https://docs.python.org/3.9/library/importlib.html#module-importlib.resources>`_
module for Python 2.7, and 3.6 through 3.8. Users of Python 3.9 and beyond
should use the standard library module, since for these versions,
``importlib_resources`` just delegates to that module.

The key goal of this module is to replace parts of `pkg_resources
<https://setuptools.readthedocs.io/en/latest/pkg_resources.html>`_ with a
solution in Python's stdlib that relies on well-defined APIs. This makes
reading resources included in packages easier, with more stable and consistent
semantics.
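A minimal sketch of the API being vendored: the snippet uses the stdlib ``importlib.resources`` (identical in interface, per the note above; on 2.7/3.6 the import would be ``import importlib_resources``), and the ``demo_pkg`` package and ``data.txt`` file are fabricated for the example.

```python
import os
import sys
import tempfile
from importlib import resources  # `import importlib_resources as resources` on 2.7/3.6

# Build a throwaway package containing one data file.
tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, "demo_pkg")
os.makedirs(pkg_dir)
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "data.txt"), "w") as f:
    f.write("hello resources")

# read_text resolves the resource through the import system rather than by
# guessing filesystem paths, which is the point of the API.
sys.path.insert(0, tmp)
text = resources.read_text("demo_pkg", "data.txt")
assert text == "hello resources"
```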
67
vendor/importlib_resources-3.3.1.dist-info/RECORD
vendored
Normal file
@ -0,0 +1,67 @@
importlib_resources-3.3.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
importlib_resources-3.3.1.dist-info/LICENSE,sha256=uWRjFdYGataJX2ziXk048ItUglQmjng3GWBALaWA36U,568
importlib_resources-3.3.1.dist-info/METADATA,sha256=ulI0eHuldtC-h2_WiQal2AE2eoE91x_xKzbz9kpWvlk,1791
importlib_resources-3.3.1.dist-info/RECORD,,
importlib_resources-3.3.1.dist-info/WHEEL,sha256=oh0NKYrTcu1i1-wgrI1cnhkjYIi8WJ-8qd9Jrr5_y4E,110
importlib_resources-3.3.1.dist-info/top_level.txt,sha256=fHIjHU1GZwAjvcydpmUnUrTnbvdiWjG4OEVZK8by0TQ,20
importlib_resources/__init__.py,sha256=hswDmLAH0IUlLWwmdHXPN2mgus2bk5IwDP-BFzg7VKo,977
importlib_resources/__init__.pyc,,
importlib_resources/_common.py,sha256=RN8cXOZtlygvlbyTewd-ni9wC1hwXpfbZnrl7kbx0nI,3121
importlib_resources/_common.pyc,,
importlib_resources/_compat.py,sha256=NDCXOf1097aDJJx-_pQ0UIktzVx2G1aPIQTRFGx0FHI,3694
importlib_resources/_compat.pyc,,
importlib_resources/_py2.py,sha256=G9M5mv1ILl8NARGdNX0v9_F_Hb4HUKCS-FCNK63Ajvw,4146
importlib_resources/_py2.pyc,,
importlib_resources/_py3.py,sha256=Bc-p0UYfPVWXFJ21HLNfVvbVrPJFXBA0g80rqPInkq8,5564
importlib_resources/abc.py,sha256=6PX4Nprv39YnAht3NymhHIuSso0ocAKqDJZf-A6BgIw,3894
importlib_resources/abc.pyc,,
importlib_resources/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
importlib_resources/readers.py,sha256=fGuSBoMeeERUVrscN9Grhp0s-wKMy7nMVbCx92vIlGs,3674
importlib_resources/readers.pyc,,
importlib_resources/tests/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
importlib_resources/tests/__init__.pyc,,
importlib_resources/tests/_compat.py,sha256=geKWJe8UGXjC181JxmtxR3A_o5VrR4yxolS0xbnxMlw,801
importlib_resources/tests/_compat.pyc,,
importlib_resources/tests/data01/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
importlib_resources/tests/data01/__init__.pyc,,
importlib_resources/tests/data01/binary.file,sha256=BU7ewdAhH2JP7Qy8qdT5QAsOSRxDdCryxbCr6_DJkNg,4
importlib_resources/tests/data01/subdirectory/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
importlib_resources/tests/data01/subdirectory/__init__.pyc,,
importlib_resources/tests/data01/subdirectory/binary.file,sha256=BU7ewdAhH2JP7Qy8qdT5QAsOSRxDdCryxbCr6_DJkNg,4
importlib_resources/tests/data01/utf-16.file,sha256=t5q9qhxX0rYqItBOM8D3ylwG-RHrnOYteTLtQr6sF7g,44
importlib_resources/tests/data01/utf-8.file,sha256=kwWgYG4yQ-ZF2X_WA66EjYPmxJRn-w8aSOiS9e8tKYY,20
importlib_resources/tests/data02/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
importlib_resources/tests/data02/__init__.pyc,,
importlib_resources/tests/data02/one/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
importlib_resources/tests/data02/one/__init__.pyc,,
importlib_resources/tests/data02/one/resource1.txt,sha256=10flKac7c-XXFzJ3t-AB5MJjlBy__dSZvPE_dOm2q6U,13
importlib_resources/tests/data02/two/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
importlib_resources/tests/data02/two/__init__.pyc,,
importlib_resources/tests/data02/two/resource2.txt,sha256=lt2jbN3TMn9QiFKM832X39bU_62UptDdUkoYzkvEbl0,13
importlib_resources/tests/namespacedata01/binary.file,sha256=BU7ewdAhH2JP7Qy8qdT5QAsOSRxDdCryxbCr6_DJkNg,4
importlib_resources/tests/namespacedata01/utf-16.file,sha256=t5q9qhxX0rYqItBOM8D3ylwG-RHrnOYteTLtQr6sF7g,44
importlib_resources/tests/namespacedata01/utf-8.file,sha256=kwWgYG4yQ-ZF2X_WA66EjYPmxJRn-w8aSOiS9e8tKYY,20
importlib_resources/tests/py27compat.py,sha256=9lDJkGV2swPVQJg6isOorRNFWuP6KeoWd4D2bFNmzLI,965
importlib_resources/tests/py27compat.pyc,,
importlib_resources/tests/test_files.py,sha256=91rf4C74_aJsKNSt-a-03slVpY9QSAuCbogFWnsaPjE,1017
importlib_resources/tests/test_files.pyc,,
importlib_resources/tests/test_open.py,sha256=pIYWvuTDpQOJKX0SEuOKGotssZcEeY_xNPDqLGCvP_U,2565
importlib_resources/tests/test_open.pyc,,
importlib_resources/tests/test_path.py,sha256=GnUOu-338o9offnC8xwbXjH9JIQJpD7JujgQkGB106Q,1548
importlib_resources/tests/test_path.pyc,,
importlib_resources/tests/test_read.py,sha256=DpA7tzxSQlU0_YQuWibB3E5PDL9fQUdzeKoEUGnAx78,2046
importlib_resources/tests/test_read.pyc,,
importlib_resources/tests/test_reader.py,sha256=yEO0xyrYDcGRmsBC6A1n99GXiTZpVvp-uGA313s6aao,4638
importlib_resources/tests/test_reader.pyc,,
importlib_resources/tests/test_resource.py,sha256=GbrMeHJ74N6KJG38TDodCp--nsRnFHXJc7NrAEqUPaU,8766
importlib_resources/tests/test_resource.pyc,,
importlib_resources/tests/util.py,sha256=8hBFwqIZRJFNvkboEB7aWsCqTtgUjlWI_iQ0KV158Yk,5914
importlib_resources/tests/util.pyc,,
importlib_resources/tests/zipdata01/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
importlib_resources/tests/zipdata01/__init__.pyc,,
importlib_resources/tests/zipdata01/ziptestdata.zip,sha256=AYf51fj80OKCRis93v2DlZjt5rM-VQOPptSHJbFtkXw,1131
importlib_resources/tests/zipdata02/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
importlib_resources/tests/zipdata02/__init__.pyc,,
importlib_resources/tests/zipdata02/ziptestdata.zip,sha256=e6HXvTEObXvJcNxyX5I8tu5M8_6mSN8ALahHfqE7ADA,698
importlib_resources/trees.py,sha256=U3FlQSI5--eF4AdzOjBvW4xnjL21OFX8ivk82Quwv_M,117
importlib_resources/trees.pyc,,
6
vendor/importlib_resources-3.3.1.dist-info/WHEEL
vendored
Normal file
@ -0,0 +1,6 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.36.1)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any
1
vendor/importlib_resources-3.3.1.dist-info/top_level.txt
vendored
Normal file
@ -0,0 +1 @@
importlib_resources
53
vendor/importlib_resources/__init__.py
vendored
Normal file
@ -0,0 +1,53 @@
"""Read resources contained within a package."""

import sys

from ._common import (
    as_file, files,
    )

# For compatibility. Ref #88.
# Also requires hook-importlib_resources.py (Ref #101).
__import__('importlib_resources.trees')


__all__ = [
    'Package',
    'Resource',
    'ResourceReader',
    'as_file',
    'contents',
    'files',
    'is_resource',
    'open_binary',
    'open_text',
    'path',
    'read_binary',
    'read_text',
    ]


if sys.version_info >= (3,):
    from importlib_resources._py3 import (
        Package,
        Resource,
        contents,
        is_resource,
        open_binary,
        open_text,
        path,
        read_binary,
        read_text,
        )
    from importlib_resources.abc import ResourceReader
else:
    from importlib_resources._py2 import (
        contents,
        is_resource,
        open_binary,
        open_text,
        path,
        read_binary,
        read_text,
        )
    del __all__[:3]
120
vendor/importlib_resources/_common.py
vendored
Normal file
@ -0,0 +1,120 @@
from __future__ import absolute_import

import os
import tempfile
import contextlib
import types
import importlib

from ._compat import (
    Path, FileNotFoundError,
    singledispatch, package_spec,
    )

if False:  # TYPE_CHECKING
    from typing import Union, Any, Optional
    from .abc import ResourceReader
    Package = Union[types.ModuleType, str]


def files(package):
    """
    Get a Traversable resource from a package
    """
    return from_package(get_package(package))


def normalize_path(path):
    # type: (Any) -> str
    """Normalize a path by ensuring it is a string.

    If the resulting string contains path separators, an exception is raised.
    """
    str_path = str(path)
    parent, file_name = os.path.split(str_path)
    if parent:
        raise ValueError('{!r} must be only a file name'.format(path))
    return file_name


def get_resource_reader(package):
    # type: (types.ModuleType) -> Optional[ResourceReader]
    """
    Return the package's loader if it's a ResourceReader.
    """
    # We can't use
    # a issubclass() check here because apparently abc.'s __subclasscheck__()
    # hook wants to create a weak reference to the object, but
    # zipimport.zipimporter does not support weak references, resulting in a
    # TypeError. That seems terrible.
    spec = package.__spec__
    reader = getattr(spec.loader, 'get_resource_reader', None)
    if reader is None:
        return None
    return reader(spec.name)


def resolve(cand):
    # type: (Package) -> types.ModuleType
    return (
        cand if isinstance(cand, types.ModuleType)
        else importlib.import_module(cand)
        )


def get_package(package):
    # type: (Package) -> types.ModuleType
    """Take a package name or module object and return the module.

    Raise an exception if the resolved module is not a package.
    """
    resolved = resolve(package)
    if package_spec(resolved).submodule_search_locations is None:
        raise TypeError('{!r} is not a package'.format(package))
    return resolved


def from_package(package):
    """
    Return a Traversable object for the given package.

    """
    spec = package_spec(package)
    reader = spec.loader.get_resource_reader(spec.name)
    return reader.files()


@contextlib.contextmanager
def _tempfile(reader, suffix=''):
    # Not using tempfile.NamedTemporaryFile as it leads to deeper 'try'
    # blocks due to the need to close the temporary file to work on Windows
    # properly.
    fd, raw_path = tempfile.mkstemp(suffix=suffix)
    try:
        os.write(fd, reader())
        os.close(fd)
        del reader
        yield Path(raw_path)
    finally:
        try:
            os.remove(raw_path)
        except FileNotFoundError:
            pass


@singledispatch
def as_file(path):
    """
    Given a Traversable object, return that object as a
    path on the local file system in a context manager.
    """
    return _tempfile(path.read_bytes, suffix=path.name)


@as_file.register(Path)
@contextlib.contextmanager
def _(path):
    """
    Degenerate behavior for pathlib.Path objects.
    """
    yield path
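Since `_tempfile` is the machinery behind `as_file` for resources that have no real filesystem path, a self-contained sketch of the same pattern (stdlib only; names chosen for the example) shows the lifecycle: the file exists inside the `with` block and is removed afterwards:

```python
import contextlib
import os
import tempfile
from pathlib import Path


@contextlib.contextmanager
def _tempfile(reader, suffix=''):
    # Same pattern as above: mkstemp instead of NamedTemporaryFile so the
    # descriptor can be closed before the path is handed out (Windows-safe).
    fd, raw_path = tempfile.mkstemp(suffix=suffix)
    try:
        os.write(fd, reader())
        os.close(fd)
        yield Path(raw_path)
    finally:
        try:
            os.remove(raw_path)
        except FileNotFoundError:
            pass


with _tempfile(lambda: b'payload', suffix='.bin') as p:
    inside = p.read_bytes()
    existed = p.exists()

print(inside, existed, p.exists())  # b'payload' True False
```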
139
vendor/importlib_resources/_compat.py
vendored
Normal file
@ -0,0 +1,139 @@
from __future__ import absolute_import
import sys

# flake8: noqa

if sys.version_info > (3,5):
    from pathlib import Path, PurePath
else:
    from pathlib2 import Path, PurePath  # type: ignore


if sys.version_info > (3,):
    from contextlib import suppress
else:
    from contextlib2 import suppress  # type: ignore


try:
    from functools import singledispatch
except ImportError:
    from singledispatch import singledispatch  # type: ignore


try:
    from abc import ABC  # type: ignore
except ImportError:
    from abc import ABCMeta

    class ABC(object):  # type: ignore
        __metaclass__ = ABCMeta


try:
    FileNotFoundError = FileNotFoundError  # type: ignore
except NameError:
    FileNotFoundError = OSError  # type: ignore


try:
    NotADirectoryError = NotADirectoryError  # type: ignore
except NameError:
    NotADirectoryError = OSError  # type: ignore


try:
    from zipfile import Path as ZipPath  # type: ignore
except ImportError:
    from zipp import Path as ZipPath  # type: ignore


try:
    from typing import runtime_checkable  # type: ignore
except ImportError:
    def runtime_checkable(cls):  # type: ignore
        return cls


try:
    from typing import Protocol  # type: ignore
except ImportError:
    Protocol = ABC  # type: ignore


__metaclass__ = type


class PackageSpec:
    def __init__(self, **kwargs):
        vars(self).update(kwargs)


class TraversableResourcesAdapter:
    def __init__(self, spec):
        self.spec = spec
        self.loader = LoaderAdapter(spec)

    def __getattr__(self, name):
        return getattr(self.spec, name)


class LoaderAdapter:
    """
    Adapt loaders to provide TraversableResources and other
    compatibility.
    """
    def __init__(self, spec):
        self.spec = spec

    @property
    def path(self):
        # Python < 3
        return self.spec.origin

    def get_resource_reader(self, name):
        # Python < 3.9
        from . import readers

        def _zip_reader(spec):
            with suppress(AttributeError):
                return readers.ZipReader(spec.loader, spec.name)

        def _namespace_reader(spec):
            with suppress(AttributeError, ValueError):
                return readers.NamespaceReader(spec.submodule_search_locations)

        def _available_reader(spec):
            with suppress(AttributeError):
                return spec.loader.get_resource_reader(spec.name)

        def _native_reader(spec):
            reader = _available_reader(spec)
            return reader if hasattr(reader, 'files') else None

        return (
            # native reader if it supplies 'files'
            _native_reader(self.spec) or
            # local ZipReader if a zip module
            _zip_reader(self.spec) or
            # local NamespaceReader if a namespace module
            _namespace_reader(self.spec) or
            # local FileReader
            readers.FileReader(self)
            )


def package_spec(package):
    """
    Construct a minimal package spec suitable for
    matching the interfaces this library relies upon
    in later Python versions.
    """
    spec = getattr(package, '__spec__', None) or \
        PackageSpec(
            origin=package.__file__,
            loader=getattr(package, '__loader__', None),
            name=package.__name__,
            submodule_search_locations=getattr(package, '__path__', None),
        )
    return TraversableResourcesAdapter(spec)
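The cascade of `try`/`except ImportError` blocks above is the standard graceful-degradation idiom: prefer the stdlib name, fall back to a backport. A minimal stdlib-only sketch of the idiom (the `describe` function is invented for illustration; on Python 3 the first import always succeeds, so the fallback branch only documents the Python 2 path):

```python
try:
    from functools import singledispatch  # stdlib on Python 3.4+
except ImportError:  # pragma: no cover
    # The vendored code imports the `singledispatch` backport here instead.
    raise


@singledispatch
def describe(value):
    # Generic fallback for unregistered types.
    return 'other'


@describe.register(int)
def _(value):
    return 'int'


print(describe(3), describe('x'))  # int other
```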
107
vendor/importlib_resources/_py2.py
vendored
Normal file
@ -0,0 +1,107 @@
import os
import errno

from . import _common
from ._compat import FileNotFoundError
from io import BytesIO, TextIOWrapper, open as io_open


def open_binary(package, resource):
    """Return a file-like object opened for binary reading of the resource."""
    resource = _common.normalize_path(resource)
    package = _common.get_package(package)
    # Using pathlib doesn't work well here due to the lack of 'strict' argument
    # for pathlib.Path.resolve() prior to Python 3.6.
    package_path = os.path.dirname(package.__file__)
    relative_path = os.path.join(package_path, resource)
    full_path = os.path.abspath(relative_path)
    try:
        return io_open(full_path, 'rb')
    except IOError:
        # This might be a package in a zip file. zipimport provides a loader
        # with a functioning get_data() method, however we have to strip the
        # archive (i.e. the .zip file's name) off the front of the path. This
        # is because the zipimport loader in Python 2 doesn't actually follow
        # PEP 302. It should allow the full path, but actually requires that
        # the path be relative to the zip file.
        try:
            loader = package.__loader__
            full_path = relative_path[len(loader.archive)+1:]
            data = loader.get_data(full_path)
        except (IOError, AttributeError):
            package_name = package.__name__
            message = '{!r} resource not found in {!r}'.format(
                resource, package_name)
            raise FileNotFoundError(message)
        return BytesIO(data)


def open_text(package, resource, encoding='utf-8', errors='strict'):
    """Return a file-like object opened for text reading of the resource."""
    return TextIOWrapper(
        open_binary(package, resource), encoding=encoding, errors=errors)


def read_binary(package, resource):
    """Return the binary contents of the resource."""
    with open_binary(package, resource) as fp:
        return fp.read()


def read_text(package, resource, encoding='utf-8', errors='strict'):
    """Return the decoded string of the resource.

    The decoding-related arguments have the same semantics as those of
    bytes.decode().
    """
    with open_text(package, resource, encoding, errors) as fp:
        return fp.read()


def path(package, resource):
    """A context manager providing a file path object to the resource.

    If the resource does not already exist on its own on the file system,
    a temporary file will be created. If the file was created, the file
    will be deleted upon exiting the context manager (no exception is
    raised if the file was deleted prior to the context manager
    exiting).
    """
    path = _common.files(package).joinpath(_common.normalize_path(resource))
    if not path.is_file():
        raise FileNotFoundError(path)
    return _common.as_file(path)


def is_resource(package, name):
    """True if name is a resource inside package.

    Directories are *not* resources.
    """
    package = _common.get_package(package)
    _common.normalize_path(name)
    try:
        package_contents = set(contents(package))
    except OSError as error:
        if error.errno not in (errno.ENOENT, errno.ENOTDIR):
            # We won't hit this in the Python 2 tests, so it'll appear
            # uncovered. We could mock os.listdir() to return a non-ENOENT or
            # ENOTDIR, but then we'd have to depend on another external
            # library since Python 2 doesn't have unittest.mock. It's not
            # worth it.
            raise  # pragma: nocover
        return False
    if name not in package_contents:
        return False
    return (_common.from_package(package) / name).is_file()


def contents(package):
    """Return an iterable of entries in `package`.

    Note that not all entries are resources. Specifically, directories are
    not considered resources. Use `is_resource()` on each entry returned here
    to check if it is a resource or not.
    """
    package = _common.get_package(package)
    return list(item.name for item in _common.from_package(package).iterdir())
160
vendor/importlib_resources/_py3.py
vendored
Normal file
@ -0,0 +1,160 @@
import os
import io

from . import _common
from contextlib import suppress
from importlib.abc import ResourceLoader
from io import BytesIO, TextIOWrapper
from pathlib import Path
from types import ModuleType
from typing import Iterable, Iterator, Optional, Set, Union  # noqa: F401
from typing import cast
from typing.io import BinaryIO, TextIO
from collections.abc import Sequence
from functools import singledispatch

if False:  # TYPE_CHECKING
    from typing import ContextManager

Package = Union[ModuleType, str]
Resource = Union[str, os.PathLike]


def open_binary(package: Package, resource: Resource) -> BinaryIO:
    """Return a file-like object opened for binary reading of the resource."""
    resource = _common.normalize_path(resource)
    package = _common.get_package(package)
    reader = _common.get_resource_reader(package)
    if reader is not None:
        return reader.open_resource(resource)
    # Using pathlib doesn't work well here due to the lack of 'strict'
    # argument for pathlib.Path.resolve() prior to Python 3.6.
    if package.__spec__.submodule_search_locations is not None:
        paths = package.__spec__.submodule_search_locations
    elif package.__spec__.origin is not None:
        paths = [os.path.dirname(os.path.abspath(package.__spec__.origin))]

    for package_path in paths:
        full_path = os.path.join(package_path, resource)
        try:
            return open(full_path, mode='rb')
        except OSError:
            # Just assume the loader is a resource loader; all the relevant
            # importlib.machinery loaders are and an AttributeError for
            # get_data() will make it clear what is needed from the loader.
            loader = cast(ResourceLoader, package.__spec__.loader)
            data = None
            if hasattr(package.__spec__.loader, 'get_data'):
                with suppress(OSError):
                    data = loader.get_data(full_path)
            if data is not None:
                return BytesIO(data)

    raise FileNotFoundError('{!r} resource not found in {!r}'.format(
        resource, package.__spec__.name))


def open_text(package: Package,
              resource: Resource,
              encoding: str = 'utf-8',
              errors: str = 'strict') -> TextIO:
    """Return a file-like object opened for text reading of the resource."""
    return TextIOWrapper(
        open_binary(package, resource), encoding=encoding, errors=errors)


def read_binary(package: Package, resource: Resource) -> bytes:
    """Return the binary contents of the resource."""
    with open_binary(package, resource) as fp:
        return fp.read()


def read_text(package: Package,
              resource: Resource,
              encoding: str = 'utf-8',
              errors: str = 'strict') -> str:
    """Return the decoded string of the resource.

    The decoding-related arguments have the same semantics as those of
    bytes.decode().
    """
    with open_text(package, resource, encoding, errors) as fp:
        return fp.read()


def path(
        package: Package, resource: Resource,
        ) -> 'ContextManager[Path]':
    """A context manager providing a file path object to the resource.

    If the resource does not already exist on its own on the file system,
    a temporary file will be created. If the file was created, the file
    will be deleted upon exiting the context manager (no exception is
    raised if the file was deleted prior to the context manager
    exiting).
    """
    reader = _common.get_resource_reader(_common.get_package(package))
    return (
        _path_from_reader(reader, _common.normalize_path(resource))
        if reader else
        _common.as_file(
            _common.files(package).joinpath(_common.normalize_path(resource)))
        )


def _path_from_reader(reader, resource):
    return _path_from_resource_path(reader, resource) or \
        _path_from_open_resource(reader, resource)


def _path_from_resource_path(reader, resource):
    with suppress(FileNotFoundError):
        return Path(reader.resource_path(resource))


def _path_from_open_resource(reader, resource):
    saved = io.BytesIO(reader.open_resource(resource).read())
    return _common._tempfile(saved.read, suffix=resource)


def is_resource(package: Package, name: str) -> bool:
    """True if `name` is a resource inside `package`.

    Directories are *not* resources.
    """
    package = _common.get_package(package)
    _common.normalize_path(name)
    reader = _common.get_resource_reader(package)
    if reader is not None:
        return reader.is_resource(name)
    package_contents = set(contents(package))
    if name not in package_contents:
        return False
    return (_common.from_package(package) / name).is_file()


def contents(package: Package) -> Iterable[str]:
    """Return an iterable of entries in `package`.

    Note that not all entries are resources. Specifically, directories are
    not considered resources. Use `is_resource()` on each entry returned here
    to check if it is a resource or not.
    """
    package = _common.get_package(package)
    reader = _common.get_resource_reader(package)
    if reader is not None:
        return _ensure_sequence(reader.contents())
    transversable = _common.from_package(package)
    if transversable.is_dir():
        return list(item.name for item in transversable.iterdir())
    return []


@singledispatch
def _ensure_sequence(iterable):
    return list(iterable)


@_ensure_sequence.register(Sequence)
def _(iterable):
    return iterable
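The `_ensure_sequence` dispatch at the end of the file mirrors a common idiom: materialize arbitrary iterables (such as generators) into lists, but pass real sequences through untouched. A stdlib-only sketch of the same idiom (`ensure_sequence` is a hypothetical name for this example):

```python
from collections.abc import Sequence
from functools import singledispatch


@singledispatch
def ensure_sequence(iterable):
    # Generic case: materialize the iterable (e.g. a generator).
    return list(iterable)


@ensure_sequence.register(Sequence)
def _(iterable):
    # Sequences (list, tuple, str, ...) are returned as-is.
    return iterable


gen = (c for c in 'abc')
print(ensure_sequence(gen))     # ['a', 'b', 'c']
t = (1, 2)
print(ensure_sequence(t) is t)  # True
```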
142
vendor/importlib_resources/abc.py
vendored
Normal file
@ -0,0 +1,142 @@
from __future__ import absolute_import

import abc

from ._compat import ABC, FileNotFoundError, runtime_checkable, Protocol

# Use mypy's comment syntax for Python 2 compatibility
try:
    from typing import BinaryIO, Iterable, Text
except ImportError:
    pass


class ResourceReader(ABC):
    """Abstract base class for loaders to provide resource reading support."""

    @abc.abstractmethod
    def open_resource(self, resource):
        # type: (Text) -> BinaryIO
        """Return an opened, file-like object for binary reading.

        The 'resource' argument is expected to represent only a file name.
        If the resource cannot be found, FileNotFoundError is raised.
        """
        # This deliberately raises FileNotFoundError instead of
        # NotImplementedError so that if this method is accidentally called,
        # it'll still do the right thing.
        raise FileNotFoundError

    @abc.abstractmethod
    def resource_path(self, resource):
        # type: (Text) -> Text
        """Return the file system path to the specified resource.

        The 'resource' argument is expected to represent only a file name.
        If the resource does not exist on the file system, raise
        FileNotFoundError.
        """
        # This deliberately raises FileNotFoundError instead of
        # NotImplementedError so that if this method is accidentally called,
        # it'll still do the right thing.
        raise FileNotFoundError

    @abc.abstractmethod
    def is_resource(self, path):
        # type: (Text) -> bool
        """Return True if the named 'path' is a resource.

        Files are resources, directories are not.
        """
        raise FileNotFoundError

    @abc.abstractmethod
    def contents(self):
        # type: () -> Iterable[str]
        """Return an iterable of entries in `package`."""
        raise FileNotFoundError


@runtime_checkable
class Traversable(Protocol):
    """
    An object with a subset of pathlib.Path methods suitable for
    traversing directories and opening files.
    """

    @abc.abstractmethod
    def iterdir(self):
        """
        Yield Traversable objects in self
        """

    @abc.abstractmethod
    def read_bytes(self):
        """
        Read contents of self as bytes
        """

    @abc.abstractmethod
    def read_text(self, encoding=None):
        """
        Read contents of self as text
        """

    @abc.abstractmethod
    def is_dir(self):
        """
        Return True if self is a dir
        """

    @abc.abstractmethod
    def is_file(self):
        """
        Return True if self is a file
        """

    @abc.abstractmethod
    def joinpath(self, child):
        """
        Return Traversable child in self
        """

    @abc.abstractmethod
    def __truediv__(self, child):
        """
        Return Traversable child in self
        """

    @abc.abstractmethod
    def open(self, mode='r', *args, **kwargs):
        """
        mode may be 'r' or 'rb' to open as text or binary. Return a handle
        suitable for reading (same as pathlib.Path.open).

        When opening as text, accepts encoding parameters such as those
        accepted by io.TextIOWrapper.
        """

    @abc.abstractproperty
    def name(self):
        # type: () -> str
        """
        The base name of this object without any parent references.
        """


class TraversableResources(ResourceReader):
    @abc.abstractmethod
    def files(self):
        """Return a Traversable object for the loaded package."""

    def open_resource(self, resource):
        return self.files().joinpath(resource).open('rb')

    def resource_path(self, resource):
        raise FileNotFoundError(resource)

    def is_resource(self, path):
        return self.files().joinpath(path).is_file()

    def contents(self):
        return (item.name for item in self.files().iterdir())
0
vendor/importlib_resources/py.typed
vendored
Normal file
123
vendor/importlib_resources/readers.py
vendored
Normal file
@ -0,0 +1,123 @@
import os.path

from collections import OrderedDict

from . import abc

from ._compat import Path, ZipPath
from ._compat import FileNotFoundError, NotADirectoryError


class FileReader(abc.TraversableResources):
    def __init__(self, loader):
        self.path = Path(loader.path).parent

    def resource_path(self, resource):
        """
        Return the file system path to prevent
        `resources.path()` from creating a temporary
        copy.
        """
        return str(self.path.joinpath(resource))

    def files(self):
        return self.path


class ZipReader(abc.TraversableResources):
    def __init__(self, loader, module):
        _, _, name = module.rpartition('.')
        self.prefix = loader.prefix.replace('\\', '/') + name + '/'
        self.archive = loader.archive

    def open_resource(self, resource):
        try:
            return super().open_resource(resource)
        except KeyError as exc:
            raise FileNotFoundError(exc.args[0])

    def is_resource(self, path):
        # workaround for `zipfile.Path.is_file` returning true
        # for non-existent paths.
        target = self.files().joinpath(path)
        return target.is_file() and target.exists()

    def files(self):
        return ZipPath(self.archive, self.prefix)


class MultiplexedPath(abc.Traversable):
    """
    Given a series of Traversable objects, implement a merged
    version of the interface across all objects. Useful for
    namespace packages which may be multihomed at a single
    name.
    """
    def __init__(self, *paths):
        paths = list(OrderedDict.fromkeys(paths))  # remove duplicates
        self._paths = list(map(Path, paths))
        if not self._paths:
            message = 'MultiplexedPath must contain at least one path'
            raise FileNotFoundError(message)
        if any(not path.is_dir() for path in self._paths):
            raise NotADirectoryError(
                'MultiplexedPath only supports directories')

    def iterdir(self):
        visited = []
        for path in self._paths:
            for file in path.iterdir():
                if file.name in visited:
                    continue
                visited.append(file.name)
                yield file

    def read_bytes(self):
        raise FileNotFoundError('{} is not a file'.format(self))

    def read_text(self, *args, **kwargs):
        raise FileNotFoundError('{} is not a file'.format(self))

    def is_dir(self):
        return True

    def is_file(self):
        return False

    def joinpath(self, child):
        # first try to find child in current paths
        for file in self.iterdir():
            if file.name == child:
                return file
        # if it does not exist, construct it with the first path
        return self._paths[0] / child

    __truediv__ = joinpath

    def open(self, *args, **kwargs):
        raise FileNotFoundError('{} is not a file'.format(self))

    @property
    def name(self):
        return os.path.basename(self._paths[0])

    def __repr__(self):
        return 'MultiplexedPath({})'.format(
            ', '.join("'{}'".format(path) for path in self._paths))


class NamespaceReader(abc.TraversableResources):
    def __init__(self, namespace_path):
        if 'NamespacePath' not in str(namespace_path):
            raise ValueError('Invalid path')
        self.path = MultiplexedPath(*list(namespace_path))

    def resource_path(self, resource):
        """
        Return the file system path to prevent
        `resources.path()` from creating a temporary
        copy.
        """
        return str(self.path.joinpath(resource))

    def files(self):
        return self.path
0
vendor/importlib_resources/tests/__init__.py
vendored
Normal file
30
vendor/importlib_resources/tests/_compat.py
vendored
Normal file
@@ -0,0 +1,30 @@
try:
    from test.support import import_helper  # type: ignore
except ImportError:
    try:
        # Python 3.9 and earlier
        class import_helper:  # type: ignore
            from test.support import modules_setup, modules_cleanup
    except ImportError:
        from . import py27compat

        class import_helper:  # type: ignore
            modules_setup = staticmethod(py27compat.modules_setup)
            modules_cleanup = staticmethod(py27compat.modules_cleanup)


try:
    from os import fspath  # type: ignore
except ImportError:
    # Python 3.5
    fspath = str  # type: ignore


try:
    # Python 3.10
    from test.support.os_helper import unlink
except ImportError:
    from test.support import unlink as _unlink

    def unlink(target):
        return _unlink(fspath(target))
0
vendor/importlib_resources/tests/data01/__init__.py
vendored
Normal file
BIN
vendor/importlib_resources/tests/data01/binary.file
vendored
Normal file
Binary file not shown.
0
vendor/importlib_resources/tests/data01/subdirectory/__init__.py
vendored
Normal file
BIN
vendor/importlib_resources/tests/data01/subdirectory/binary.file
vendored
Normal file
Binary file not shown.
BIN
vendor/importlib_resources/tests/data01/utf-16.file
vendored
Normal file
Binary file not shown.
1
vendor/importlib_resources/tests/data01/utf-8.file
vendored
Normal file
@@ -0,0 +1 @@
Hello, UTF-8 world!
0
vendor/importlib_resources/tests/data02/__init__.py
vendored
Normal file
0
vendor/importlib_resources/tests/data02/one/__init__.py
vendored
Normal file
1
vendor/importlib_resources/tests/data02/one/resource1.txt
vendored
Normal file
@@ -0,0 +1 @@
one resource
0
vendor/importlib_resources/tests/data02/two/__init__.py
vendored
Normal file
1
vendor/importlib_resources/tests/data02/two/resource2.txt
vendored
Normal file
@@ -0,0 +1 @@
two resource
BIN
vendor/importlib_resources/tests/namespacedata01/binary.file
vendored
Normal file
Binary file not shown.
BIN
vendor/importlib_resources/tests/namespacedata01/utf-16.file
vendored
Normal file
Binary file not shown.
1
vendor/importlib_resources/tests/namespacedata01/utf-8.file
vendored
Normal file
@@ -0,0 +1 @@
Hello, UTF-8 world!
23
vendor/importlib_resources/tests/py27compat.py
vendored
Normal file
@@ -0,0 +1,23 @@
import sys


def modules_setup():
    return sys.modules.copy(),


def modules_cleanup(oldmodules):
    # Encoders/decoders are registered permanently within the internal
    # codec cache. If we destroy the corresponding modules their
    # globals will be set to None which will trip up the cached functions.
    encodings = [(k, v) for k, v in sys.modules.items()
                 if k.startswith('encodings.')]
    sys.modules.clear()
    sys.modules.update(encodings)
    # XXX: This kind of problem can affect more than just encodings.
    # In particular extension modules (such as _ssl) don't cope
    # with reloading properly. Really, test modules should be cleaning
    # out the test specific modules they know they added (ala test_runpy)
    # rather than relying on this function (as test_importhooks and test_pkg
    # do currently). Implicitly imported *real* modules should be left alone
    # (see issue 10556).
    sys.modules.update(oldmodules)
39
vendor/importlib_resources/tests/test_files.py
vendored
Normal file
@@ -0,0 +1,39 @@
import typing
import unittest

import importlib_resources as resources
from importlib_resources.abc import Traversable
from . import data01
from . import util


class FilesTests:
    def test_read_bytes(self):
        files = resources.files(self.data)
        actual = files.joinpath('utf-8.file').read_bytes()
        assert actual == b'Hello, UTF-8 world!\n'

    def test_read_text(self):
        files = resources.files(self.data)
        actual = files.joinpath('utf-8.file').read_text()
        assert actual == 'Hello, UTF-8 world!\n'

    @unittest.skipUnless(
        hasattr(typing, 'runtime_checkable'),
        "Only suitable when typing supports runtime_checkable",
    )
    def test_traversable(self):
        assert isinstance(resources.files(self.data), Traversable)


class OpenDiskTests(FilesTests, unittest.TestCase):
    def setUp(self):
        self.data = data01


class OpenZipTests(FilesTests, util.ZipSetup, unittest.TestCase):
    pass


if __name__ == '__main__':
    unittest.main()
84
vendor/importlib_resources/tests/test_open.py
vendored
Normal file
@@ -0,0 +1,84 @@
import sys
import unittest

import importlib_resources as resources
from . import data01
from . import util
from .._compat import FileNotFoundError


class CommonBinaryTests(util.CommonTests, unittest.TestCase):
    def execute(self, package, path):
        with resources.open_binary(package, path):
            pass


class CommonTextTests(util.CommonTests, unittest.TestCase):
    def execute(self, package, path):
        with resources.open_text(package, path):
            pass


class OpenTests:
    def test_open_binary(self):
        with resources.open_binary(self.data, 'utf-8.file') as fp:
            result = fp.read()
            self.assertEqual(result, b'Hello, UTF-8 world!\n')

    def test_open_text_default_encoding(self):
        with resources.open_text(self.data, 'utf-8.file') as fp:
            result = fp.read()
            self.assertEqual(result, 'Hello, UTF-8 world!\n')

    def test_open_text_given_encoding(self):
        with resources.open_text(
                self.data, 'utf-16.file', 'utf-16', 'strict') as fp:
            result = fp.read()
        self.assertEqual(result, 'Hello, UTF-16 world!\n')

    def test_open_text_with_errors(self):
        # Raises UnicodeError without the 'errors' argument.
        with resources.open_text(
                self.data, 'utf-16.file', 'utf-8', 'strict') as fp:
            self.assertRaises(UnicodeError, fp.read)
        with resources.open_text(
                self.data, 'utf-16.file', 'utf-8', 'ignore') as fp:
            result = fp.read()
        self.assertEqual(
            result,
            'H\x00e\x00l\x00l\x00o\x00,\x00 '
            '\x00U\x00T\x00F\x00-\x001\x006\x00 '
            '\x00w\x00o\x00r\x00l\x00d\x00!\x00\n\x00')

    def test_open_binary_FileNotFoundError(self):
        self.assertRaises(
            FileNotFoundError,
            resources.open_binary, self.data, 'does-not-exist')

    def test_open_text_FileNotFoundError(self):
        self.assertRaises(
            FileNotFoundError,
            resources.open_text, self.data, 'does-not-exist')


class OpenDiskTests(OpenTests, unittest.TestCase):
    def setUp(self):
        self.data = data01


@unittest.skipUnless(
    sys.version_info[0] >= 3,
    'namespace packages not available on Python 2'
)
class OpenDiskNamespaceTests(OpenTests, unittest.TestCase):
    def setUp(self):
        from . import namespacedata01
        self.data = namespacedata01


class OpenZipTests(OpenTests, util.ZipSetup, unittest.TestCase):
    pass


if __name__ == '__main__':
    unittest.main()
51
vendor/importlib_resources/tests/test_path.py
vendored
Normal file
@@ -0,0 +1,51 @@
import unittest

import importlib_resources as resources
from . import data01
from . import util


class CommonTests(util.CommonTests, unittest.TestCase):

    def execute(self, package, path):
        with resources.path(package, path):
            pass


class PathTests:

    def test_reading(self):
        # Path should be readable.
        # Test also implicitly verifies the returned object is a pathlib.Path
        # instance.
        with resources.path(self.data, 'utf-8.file') as path:
            self.assertTrue(path.name.endswith("utf-8.file"), repr(path))
            # pathlib.Path.read_text() was introduced in Python 3.5.
            with path.open('r', encoding='utf-8') as file:
                text = file.read()
            self.assertEqual('Hello, UTF-8 world!\n', text)


class PathDiskTests(PathTests, unittest.TestCase):
    data = data01

    def test_natural_path(self):
        """
        Guarantee the internal implementation detail that
        file-system-backed resources do not get the tempdir
        treatment.
        """
        with resources.path(self.data, 'utf-8.file') as path:
            assert 'data' in str(path)


class PathZipTests(PathTests, util.ZipSetup, unittest.TestCase):
    def test_remove_in_context_manager(self):
        # It is not an error if the file that was temporarily stashed on the
        # file system is removed inside the `with` stanza.
        with resources.path(self.data, 'utf-8.file') as path:
            path.unlink()


if __name__ == '__main__':
    unittest.main()
63
vendor/importlib_resources/tests/test_read.py
vendored
Normal file
@@ -0,0 +1,63 @@
import unittest
import importlib_resources as resources

from . import data01
from . import util
from importlib import import_module


class CommonBinaryTests(util.CommonTests, unittest.TestCase):
    def execute(self, package, path):
        resources.read_binary(package, path)


class CommonTextTests(util.CommonTests, unittest.TestCase):
    def execute(self, package, path):
        resources.read_text(package, path)


class ReadTests:
    def test_read_binary(self):
        result = resources.read_binary(self.data, 'binary.file')
        self.assertEqual(result, b'\0\1\2\3')

    def test_read_text_default_encoding(self):
        result = resources.read_text(self.data, 'utf-8.file')
        self.assertEqual(result, 'Hello, UTF-8 world!\n')

    def test_read_text_given_encoding(self):
        result = resources.read_text(
            self.data, 'utf-16.file', encoding='utf-16')
        self.assertEqual(result, 'Hello, UTF-16 world!\n')

    def test_read_text_with_errors(self):
        # Raises UnicodeError without the 'errors' argument.
        self.assertRaises(
            UnicodeError, resources.read_text, self.data, 'utf-16.file')
        result = resources.read_text(self.data, 'utf-16.file', errors='ignore')
        self.assertEqual(
            result,
            'H\x00e\x00l\x00l\x00o\x00,\x00 '
            '\x00U\x00T\x00F\x00-\x001\x006\x00 '
            '\x00w\x00o\x00r\x00l\x00d\x00!\x00\n\x00')


class ReadDiskTests(ReadTests, unittest.TestCase):
    data = data01


class ReadZipTests(ReadTests, util.ZipSetup, unittest.TestCase):
    def test_read_submodule_resource(self):
        submodule = import_module('ziptestdata.subdirectory')
        result = resources.read_binary(
            submodule, 'binary.file')
        self.assertEqual(result, b'\0\1\2\3')

    def test_read_submodule_resource_by_name(self):
        result = resources.read_binary(
            'ziptestdata.subdirectory', 'binary.file')
        self.assertEqual(result, b'\0\1\2\3')


if __name__ == '__main__':
    unittest.main()
145
vendor/importlib_resources/tests/test_reader.py
vendored
Normal file
@@ -0,0 +1,145 @@
import os.path
import sys
import unittest

from importlib import import_module
from importlib_resources.readers import MultiplexedPath, NamespaceReader

from .._compat import FileNotFoundError, NotADirectoryError


class MultiplexedPathTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.folder = os.path.abspath(
            os.path.join(__file__, '..', 'namespacedata01')
        )

    def test_init_no_paths(self):
        with self.assertRaises(FileNotFoundError):
            MultiplexedPath()

    def test_init_file(self):
        with self.assertRaises(NotADirectoryError):
            MultiplexedPath(os.path.join(self.folder, 'binary.file'))

    def test_iterdir(self):
        contents = {
            path.name for path in MultiplexedPath(self.folder).iterdir()
        }
        try:
            contents.remove('__pycache__')
        except (KeyError, ValueError):
            pass
        self.assertEqual(
            contents,
            {'binary.file', 'utf-16.file', 'utf-8.file'}
        )

    def test_iterdir_duplicate(self):
        data01 = os.path.abspath(
            os.path.join(__file__, '..', 'data01')
        )
        contents = {
            path.name for path in
            MultiplexedPath(self.folder, data01).iterdir()
        }
        for remove in ('__pycache__', '__init__.pyc'):
            try:
                contents.remove(remove)
            except (KeyError, ValueError):
                pass
        self.assertEqual(contents, {
            '__init__.py',
            'binary.file',
            'subdirectory',
            'utf-16.file',
            'utf-8.file'
        })

    def test_is_dir(self):
        self.assertEqual(MultiplexedPath(self.folder).is_dir(), True)

    def test_is_file(self):
        self.assertEqual(MultiplexedPath(self.folder).is_file(), False)

    def test_open_file(self):
        path = MultiplexedPath(self.folder)
        with self.assertRaises(FileNotFoundError):
            path.read_bytes()
        with self.assertRaises(FileNotFoundError):
            path.read_text()
        with self.assertRaises(FileNotFoundError):
            path.open()

    def test_join_path(self):
        print('test_join_path')
        prefix = os.path.abspath(os.path.join(__file__, '..'))
        data01 = os.path.join(prefix, 'data01')
        path = MultiplexedPath(self.folder, data01)
        self.assertEqual(
            str(path.joinpath('binary.file'))[len(prefix)+1:],
            os.path.join('namespacedata01', 'binary.file')
        )
        self.assertEqual(
            str(path.joinpath('subdirectory'))[len(prefix)+1:],
            os.path.join('data01', 'subdirectory')
        )
        self.assertEqual(
            str(path.joinpath('imaginary'))[len(prefix)+1:],
            os.path.join('namespacedata01', 'imaginary')
        )

    def test_repr(self):
        self.assertEqual(
            repr(MultiplexedPath(self.folder)),
            "MultiplexedPath('{}')".format(self.folder)
        )


class NamespaceReaderTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        sys.path.append(os.path.abspath(os.path.join(__file__, '..')))

    def test_init_error(self):
        with self.assertRaises(ValueError):
            NamespaceReader(['path1', 'path2'])

    @unittest.skipUnless(
        sys.version_info[0] >= 3,
        'namespace packages not available on Python 2'
    )
    def test_resource_path(self):
        namespacedata01 = import_module('namespacedata01')
        reader = NamespaceReader(
            namespacedata01.__spec__.submodule_search_locations
        )

        root = os.path.abspath(os.path.join(__file__, '..', 'namespacedata01'))
        self.assertEqual(reader.resource_path('binary.file'), os.path.join(
            root, 'binary.file'
        ))
        self.assertEqual(reader.resource_path('imaginary'), os.path.join(
            root, 'imaginary'
        ))

    @unittest.skipUnless(
        sys.version_info[0] >= 3,
        'namespace packages not available on Python 2'
    )
    def test_files(self):
        namespacedata01 = import_module('namespacedata01')
        reader = NamespaceReader(
            namespacedata01.__spec__.submodule_search_locations
        )
        root = os.path.abspath(os.path.join(__file__, '..', 'namespacedata01'))
        self.assertIsInstance(reader.files(), MultiplexedPath)
        self.assertEqual(
            repr(reader.files()),
            "MultiplexedPath('{}')".format(root)
        )


if __name__ == '__main__':
    unittest.main()
256
vendor/importlib_resources/tests/test_resource.py
vendored
Normal file
@@ -0,0 +1,256 @@
import os.path
import sys
import unittest
import importlib_resources as resources
import uuid

from importlib_resources._compat import Path

from . import data01
from . import zipdata01, zipdata02
from . import util
from importlib import import_module
from ._compat import import_helper, unlink


class ResourceTests:
    # Subclasses are expected to set the `data` attribute.

    def test_is_resource_good_path(self):
        self.assertTrue(resources.is_resource(self.data, 'binary.file'))

    def test_is_resource_missing(self):
        self.assertFalse(resources.is_resource(self.data, 'not-a-file'))

    def test_is_resource_subresource_directory(self):
        # Directories are not resources.
        self.assertFalse(resources.is_resource(self.data, 'subdirectory'))

    def test_contents(self):
        contents = set(resources.contents(self.data))
        # There may be cruft in the directory listing of the data directory.
        # Under Python 3 we could have a __pycache__ directory, and under
        # Python 2 we could have .pyc files.  These are both artifacts of the
        # test suite importing these modules and writing these caches.  They
        # aren't germane to this test, so just filter them out.
        contents.discard('__pycache__')
        contents.discard('__init__.pyc')
        contents.discard('__init__.pyo')
        self.assertEqual(contents, {
            '__init__.py',
            'subdirectory',
            'utf-8.file',
            'binary.file',
            'utf-16.file',
        })


class ResourceDiskTests(ResourceTests, unittest.TestCase):
    def setUp(self):
        self.data = data01


class ResourceZipTests(ResourceTests, util.ZipSetup, unittest.TestCase):
    pass


@unittest.skipIf(sys.version_info < (3,), 'No ResourceReader in Python 2')
class ResourceLoaderTests(unittest.TestCase):
    def test_resource_contents(self):
        package = util.create_package(
            file=data01, path=data01.__file__, contents=['A', 'B', 'C'])
        self.assertEqual(
            set(resources.contents(package)),
            {'A', 'B', 'C'})

    def test_resource_is_resource(self):
        package = util.create_package(
            file=data01, path=data01.__file__,
            contents=['A', 'B', 'C', 'D/E', 'D/F'])
        self.assertTrue(resources.is_resource(package, 'B'))

    def test_resource_directory_is_not_resource(self):
        package = util.create_package(
            file=data01, path=data01.__file__,
            contents=['A', 'B', 'C', 'D/E', 'D/F'])
        self.assertFalse(resources.is_resource(package, 'D'))

    def test_resource_missing_is_not_resource(self):
        package = util.create_package(
            file=data01, path=data01.__file__,
            contents=['A', 'B', 'C', 'D/E', 'D/F'])
        self.assertFalse(resources.is_resource(package, 'Z'))


class ResourceCornerCaseTests(unittest.TestCase):
    def test_package_has_no_reader_fallback(self):
        # Test odd ball packages which:
        # 1. Do not have a ResourceReader as a loader
        # 2. Are not on the file system
        # 3. Are not in a zip file
        module = util.create_package(
            file=data01, path=data01.__file__, contents=['A', 'B', 'C'])
        # Give the module a dummy loader.
        module.__loader__ = object()
        # Give the module a dummy origin.
        module.__file__ = '/path/which/shall/not/be/named'
        if sys.version_info >= (3,):
            module.__spec__.loader = module.__loader__
            module.__spec__.origin = module.__file__
        self.assertFalse(resources.is_resource(module, 'A'))


class ResourceFromZipsTest01(util.ZipSetupBase, unittest.TestCase):
    ZIP_MODULE = zipdata01  # type: ignore

    def test_is_submodule_resource(self):
        submodule = import_module('ziptestdata.subdirectory')
        self.assertTrue(
            resources.is_resource(submodule, 'binary.file'))

    def test_read_submodule_resource_by_name(self):
        self.assertTrue(
            resources.is_resource('ziptestdata.subdirectory', 'binary.file'))

    def test_submodule_contents(self):
        submodule = import_module('ziptestdata.subdirectory')
        self.assertEqual(
            set(resources.contents(submodule)),
            {'__init__.py', 'binary.file'})

    def test_submodule_contents_by_name(self):
        self.assertEqual(
            set(resources.contents('ziptestdata.subdirectory')),
            {'__init__.py', 'binary.file'})


class ResourceFromZipsTest02(util.ZipSetupBase, unittest.TestCase):
    ZIP_MODULE = zipdata02  # type: ignore

    def test_unrelated_contents(self):
        """
        Test that a zip with two unrelated subpackages returns
        distinct resources. Ref python/importlib_resources#44.
        """
        self.assertEqual(
            set(resources.contents('ziptestdata.one')),
            {'__init__.py', 'resource1.txt'})
        self.assertEqual(
            set(resources.contents('ziptestdata.two')),
            {'__init__.py', 'resource2.txt'})


class DeletingZipsTest(unittest.TestCase):
    """Having accessed resources in a zip file should not keep an open
    reference to the zip.
    """
    ZIP_MODULE = zipdata01

    def setUp(self):
        modules = import_helper.modules_setup()
        self.addCleanup(import_helper.modules_cleanup, *modules)

        data_path = Path(self.ZIP_MODULE.__file__)
        data_dir = data_path.parent
        self.source_zip_path = data_dir / 'ziptestdata.zip'
        self.zip_path = Path('{}.zip'.format(uuid.uuid4())).absolute()
        self.zip_path.write_bytes(self.source_zip_path.read_bytes())
        sys.path.append(str(self.zip_path))
        self.data = import_module('ziptestdata')

    def tearDown(self):
        try:
            sys.path.remove(str(self.zip_path))
        except ValueError:
            pass

        try:
            del sys.path_importer_cache[str(self.zip_path)]
            del sys.modules[self.data.__name__]
        except KeyError:
            pass

        try:
            unlink(self.zip_path)
        except OSError:
            # If the test fails, this will probably fail too
            pass

    def test_contents_does_not_keep_open(self):
        c = resources.contents('ziptestdata')
        self.zip_path.unlink()
        del c

    def test_is_resource_does_not_keep_open(self):
        c = resources.is_resource('ziptestdata', 'binary.file')
        self.zip_path.unlink()
        del c

    def test_is_resource_failure_does_not_keep_open(self):
        c = resources.is_resource('ziptestdata', 'not-present')
        self.zip_path.unlink()
        del c

    @unittest.skip("Desired but not supported.")
    def test_path_does_not_keep_open(self):
        c = resources.path('ziptestdata', 'binary.file')
        self.zip_path.unlink()
        del c

    def test_entered_path_does_not_keep_open(self):
        # This is what certifi does on import to make its bundle
        # available for the process duration.
        c = resources.path('ziptestdata', 'binary.file').__enter__()
        self.zip_path.unlink()
        del c

    def test_read_binary_does_not_keep_open(self):
        c = resources.read_binary('ziptestdata', 'binary.file')
        self.zip_path.unlink()
        del c

    def test_read_text_does_not_keep_open(self):
        c = resources.read_text('ziptestdata', 'utf-8.file', encoding='utf-8')
        self.zip_path.unlink()
        del c


@unittest.skipUnless(
    sys.version_info[0] >= 3,
    'namespace packages not available on Python 2'
)
class ResourceFromNamespaceTest01(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        sys.path.append(os.path.abspath(os.path.join(__file__, '..')))

    def test_is_submodule_resource(self):
        self.assertTrue(
            resources.is_resource(
                import_module('namespacedata01'), 'binary.file'))

    def test_read_submodule_resource_by_name(self):
        self.assertTrue(
            resources.is_resource('namespacedata01', 'binary.file'))

    def test_submodule_contents(self):
        contents = set(resources.contents(import_module('namespacedata01')))
        try:
            contents.remove('__pycache__')
        except KeyError:
            pass
        self.assertEqual(
            contents, {'binary.file', 'utf-8.file', 'utf-16.file'})

    def test_submodule_contents_by_name(self):
        contents = set(resources.contents('namespacedata01'))
        try:
            contents.remove('__pycache__')
        except KeyError:
            pass
        self.assertEqual(
            contents, {'binary.file', 'utf-8.file', 'utf-16.file'})


if __name__ == '__main__':
    unittest.main()
190
vendor/importlib_resources/tests/util.py
vendored
Normal file
@@ -0,0 +1,190 @@
import abc
import importlib
import io
import sys
import types
import unittest

from . import data01
from . import zipdata01
from .._compat import ABC, Path, PurePath, FileNotFoundError
from ..abc import ResourceReader
from ._compat import import_helper


try:
    from importlib.machinery import ModuleSpec
except ImportError:
    ModuleSpec = None  # type: ignore


def create_package(file, path, is_package=True, contents=()):
    class Reader(ResourceReader):
        def get_resource_reader(self, package):
            return self

        def open_resource(self, path):
            self._path = path
            if isinstance(file, Exception):
                raise file
            else:
                return file

        def resource_path(self, path_):
            self._path = path_
            if isinstance(path, Exception):
                raise path
            else:
                return path

        def is_resource(self, path_):
            self._path = path_
            if isinstance(path, Exception):
                raise path
            for entry in contents:
                parts = entry.split('/')
                if len(parts) == 1 and parts[0] == path_:
                    return True
            return False

        def contents(self):
            if isinstance(path, Exception):
                raise path
            # There's no yield from in baseball, er, Python 2.
            for entry in contents:
                yield entry

    name = 'testingpackage'
    # Unfortunately importlib.util.module_from_spec() was not introduced until
    # Python 3.5.
    module = types.ModuleType(name)
    if ModuleSpec is None:
        # Python 2.
        module.__name__ = name
        module.__file__ = 'does-not-exist'
        if is_package:
            module.__path__ = []
    else:
        # Python 3.
        loader = Reader()
        spec = ModuleSpec(
            name, loader,
            origin='does-not-exist',
            is_package=is_package)
        module.__spec__ = spec
        module.__loader__ = loader
    return module


class CommonTests(ABC):

    @abc.abstractmethod
    def execute(self, package, path):
        raise NotImplementedError

    def test_package_name(self):
        # Passing in the package name should succeed.
        self.execute(data01.__name__, 'utf-8.file')

    def test_package_object(self):
        # Passing in the package itself should succeed.
        self.execute(data01, 'utf-8.file')

    def test_string_path(self):
        # Passing in a string for the path should succeed.
        path = 'utf-8.file'
        self.execute(data01, path)

    @unittest.skipIf(sys.version_info < (3, 6), 'requires os.PathLike support')
    def test_pathlib_path(self):
        # Passing in a pathlib.PurePath object for the path should succeed.
        path = PurePath('utf-8.file')
        self.execute(data01, path)

    def test_absolute_path(self):
        # An absolute path is a ValueError.
        path = Path(__file__)
        full_path = path.parent/'utf-8.file'
        with self.assertRaises(ValueError):
            self.execute(data01, full_path)

    def test_relative_path(self):
        # A relative path is a ValueError.
        with self.assertRaises(ValueError):
            self.execute(data01, '../data01/utf-8.file')

    def test_importing_module_as_side_effect(self):
        # The anchor package can already be imported.
        del sys.modules[data01.__name__]
        self.execute(data01.__name__, 'utf-8.file')

    def test_non_package_by_name(self):
        # The anchor package cannot be a module.
        with self.assertRaises(TypeError):
            self.execute(__name__, 'utf-8.file')

    def test_non_package_by_package(self):
        # The anchor package cannot be a module.
        with self.assertRaises(TypeError):
            module = sys.modules['importlib_resources.tests.util']
            self.execute(module, 'utf-8.file')

    @unittest.skipIf(sys.version_info < (3,), 'No ResourceReader in Python 2')
    def test_resource_opener(self):
        bytes_data = io.BytesIO(b'Hello, world!')
        package = create_package(file=bytes_data, path=FileNotFoundError())
        self.execute(package, 'utf-8.file')
        self.assertEqual(package.__loader__._path, 'utf-8.file')

    @unittest.skipIf(sys.version_info < (3,), 'No ResourceReader in Python 2')
    def test_resource_path(self):
        bytes_data = io.BytesIO(b'Hello, world!')
        path = __file__
        package = create_package(file=bytes_data, path=path)
        self.execute(package, 'utf-8.file')
        self.assertEqual(package.__loader__._path, 'utf-8.file')

    def test_useless_loader(self):
        package = create_package(file=FileNotFoundError(),
                                 path=FileNotFoundError())
        with self.assertRaises(FileNotFoundError):
            self.execute(package, 'utf-8.file')


class ZipSetupBase:
    ZIP_MODULE = None

    @classmethod
    def setUpClass(cls):
        data_path = Path(cls.ZIP_MODULE.__file__)
        data_dir = data_path.parent
        cls._zip_path = str(data_dir / 'ziptestdata.zip')
        sys.path.append(cls._zip_path)
        cls.data = importlib.import_module('ziptestdata')
|
||||
|
||||
@classmethod
|
||||
def tearDownClass(cls):
|
||||
try:
|
||||
sys.path.remove(cls._zip_path)
|
||||
except ValueError:
|
||||
pass
|
||||
|
||||
try:
|
||||
del sys.path_importer_cache[cls._zip_path]
|
||||
del sys.modules[cls.data.__name__]
|
||||
except KeyError:
|
||||
pass
|
||||
|
||||
try:
|
||||
del cls.data
|
||||
del cls._zip_path
|
||||
except AttributeError:
|
||||
pass
|
||||
|
||||
def setUp(self):
|
||||
modules = import_helper.modules_setup()
|
||||
self.addCleanup(import_helper.modules_cleanup, *modules)
|
||||
|
||||
|
||||
class ZipSetup(ZipSetupBase):
|
||||
ZIP_MODULE = zipdata01 # type: ignore
|
0
vendor/importlib_resources/tests/zipdata01/__init__.py
vendored
Normal file
BIN
vendor/importlib_resources/tests/zipdata01/ziptestdata.zip
vendored
Normal file
Binary file not shown.
0
vendor/importlib_resources/tests/zipdata02/__init__.py
vendored
Normal file
BIN
vendor/importlib_resources/tests/zipdata02/ziptestdata.zip
vendored
Normal file
Binary file not shown.
6
vendor/importlib_resources/trees.py
vendored
Normal file
@@ -0,0 +1,6 @@
# for compatibility with 1.1, continue to expose as_file here.

from ._common import as_file


__all__ = ['as_file']
5
vendor/netaddr-0.8.0.dist-info/AUTHORS
vendored
Normal file
@@ -0,0 +1,5 @@
- David P. D. Moss (author, maintainer) drkjam@gmail.com
- Stefan Nordhausen (maintainer) stefan.nordhausen@immobilienscout24.de
- Jakub Stasiak (maintainer) jakub@stasiak.at

Released under the BSD License (see :doc:`license` for details).
1
vendor/netaddr-0.8.0.dist-info/INSTALLER
vendored
Normal file
@@ -0,0 +1 @@
pip
36
vendor/netaddr-0.8.0.dist-info/LICENSE
vendored
Normal file
@@ -0,0 +1,36 @@
Here are the licenses applicable to the use of the netaddr library.

-------
netaddr
-------

COPYRIGHT AND LICENSE

Copyright (c) 2008 by David P. D. Moss. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

  * Redistributions of source code must retain the above copyright
    notice, this list of conditions and the following disclaimer.

  * Redistributions in binary form must reproduce the above copyright
    notice, this list of conditions and the following disclaimer in the
    documentation and/or other materials provided with the distribution.

  * Neither the name of David P. D. Moss nor the names of contributors
    may be used to endorse or promote products derived from this
    software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
118
vendor/netaddr-0.8.0.dist-info/METADATA
vendored
Normal file
@@ -0,0 +1,118 @@
Metadata-Version: 2.1
Name: netaddr
Version: 0.8.0
Summary: A network address manipulation library for Python
Home-page: https://github.com/drkjam/netaddr/
Author: David P. D. Moss, Stefan Nordhausen et al
Author-email: drkjam@gmail.com
License: BSD License
Download-URL: https://pypi.org/project/netaddr/
Keywords: Networking,Systems Administration,IANA,IEEE,CIDR,IP,IPv4,IPv6,CIDR,EUI,MAC,MAC-48,EUI-48,EUI-64
Platform: OS Independent
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: System Administrators
Classifier: Intended Audience :: Telecommunications Industry
Classifier: License :: OSI Approved :: BSD License
Classifier: License :: OSI Approved :: MIT License
Classifier: Natural Language :: English
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Topic :: Communications
Classifier: Topic :: Documentation
Classifier: Topic :: Education
Classifier: Topic :: Education :: Testing
Classifier: Topic :: Home Automation
Classifier: Topic :: Internet
Classifier: Topic :: Internet :: Log Analysis
Classifier: Topic :: Internet :: Name Service (DNS)
Classifier: Topic :: Internet :: Proxy Servers
Classifier: Topic :: Internet :: WWW/HTTP
Classifier: Topic :: Internet :: WWW/HTTP :: Indexing/Search
Classifier: Topic :: Internet :: WWW/HTTP :: Site Management
Classifier: Topic :: Security
Classifier: Topic :: Software Development
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Software Development :: Quality Assurance
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Software Development :: Testing :: Traffic Generation
Classifier: Topic :: System :: Benchmark
Classifier: Topic :: System :: Clustering
Classifier: Topic :: System :: Distributed Computing
Classifier: Topic :: System :: Installation/Setup
Classifier: Topic :: System :: Logging
Classifier: Topic :: System :: Monitoring
Classifier: Topic :: System :: Networking
Classifier: Topic :: System :: Networking :: Firewalls
Classifier: Topic :: System :: Networking :: Monitoring
Classifier: Topic :: System :: Networking :: Time Synchronization
Classifier: Topic :: System :: Recovery Tools
Classifier: Topic :: System :: Shells
Classifier: Topic :: System :: Software Distribution
Classifier: Topic :: System :: Systems Administration
Classifier: Topic :: System :: System Shells
Classifier: Topic :: Text Processing
Classifier: Topic :: Text Processing :: Filters
Classifier: Topic :: Utilities
Requires-Dist: importlib-resources ; python_version < "3.7"

netaddr
=======

A system-independent network address manipulation library for Python 2.7 and 3.5+.
(Python 2.7 and 3.5 support is deprecated).

.. image:: https://codecov.io/gh/netaddr/netaddr/branch/master/graph/badge.svg
    :target: https://codecov.io/gh/netaddr/netaddr
.. image:: https://github.com/netaddr/netaddr/workflows/CI/badge.svg
    :target: https://github.com/netaddr/netaddr/actions?query=workflow%3ACI+branch%3Amaster
.. image:: https://img.shields.io/pypi/v/netaddr.svg
    :target: https://pypi.org/project/netaddr/
.. image:: https://img.shields.io/pypi/pyversions/netaddr.svg
    :target: pypi.python.org/pypi/netaddr

Provides support for:

Layer 3 addresses

- IPv4 and IPv6 addresses, subnets, masks, prefixes
- iterating, slicing, sorting, summarizing and classifying IP networks
- dealing with various range formats (CIDR, arbitrary ranges and
  globs, nmap)
- set based operations (unions, intersections etc) over IP addresses
  and subnets
- parsing a large variety of different formats and notations
- looking up IANA IP block information
- generating DNS reverse lookups
- supernetting and subnetting

Layer 2 addresses

- representation and manipulation of MAC addresses and EUI-64 identifiers
- looking up IEEE organisational information (OUI, IAB)
- generating derived IPv6 addresses

Starting with Python 3.3 there's an `ipaddress <https://docs.python.org/3/library/ipaddress.html>`_
module in the Python standard library which provides layer 3 address manipulation
capabilities overlapping ``netaddr``.

Documentation
-------------

Latest documentation https://netaddr.readthedocs.io/en/latest/

Share and enjoy!
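The description above notes that the stdlib `ipaddress` module overlaps netaddr's layer 3 capabilities. A minimal sketch of that overlap (membership, subnetting and supernetting), using only the standard library so it runs without netaddr installed; 192.0.2.0/24 is the TEST-NET-1 documentation range, used here purely for illustration:

```python
import ipaddress

net = ipaddress.ip_network('192.0.2.0/24')

# Membership, analogous to `IPAddress(...) in IPNetwork(...)` in netaddr.
assert ipaddress.ip_address('192.0.2.42') in net

# Subnetting: split the /24 into two /25s.
halves = [str(subnet) for subnet in net.subnets(new_prefix=25)]
print(halves)  # ['192.0.2.0/25', '192.0.2.128/25']

# Supernetting: widen the prefix back up by one bit.
print(net.supernet(new_prefix=23))  # 192.0.2.0/23
```

netaddr's own API additionally covers globs, nmap-style ranges, IANA lookups and layer 2 identifiers, which `ipaddress` does not.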
58
vendor/netaddr-0.8.0.dist-info/RECORD
vendored
Normal file
@@ -0,0 +1,58 @@
../../bin/netaddr,sha256=Vur2dSZN9Xcjeo6hltTm7xF_0_kCv-sihPegDrXZm5c,258
netaddr-0.8.0.dist-info/AUTHORS,sha256=ukJIe5KKm4I3UsXdPRTflQj-wISd8--amGiTe8PAVco,241
netaddr-0.8.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
netaddr-0.8.0.dist-info/LICENSE,sha256=DlPeYlR3h0YvQe77XO4xoU9-p2e6A2LG-TBPF0JIbUc,1606
netaddr-0.8.0.dist-info/METADATA,sha256=0kO1sE_BkLIoTmbDur34u3t45VoiqdBIG9LqX7fV168,4884
netaddr-0.8.0.dist-info/RECORD,,
netaddr-0.8.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
netaddr-0.8.0.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110
netaddr-0.8.0.dist-info/entry_points.txt,sha256=TGfqa95bQEnE9BUe3rrr0Hq1Y1rw49ZVkVDjKg7nE90,46
netaddr-0.8.0.dist-info/top_level.txt,sha256=GlouRmUZSQCg0kloRWKY6qxTHAsQo_rE8QvlxYBHwOE,8
netaddr/__init__.py,sha256=de86OhloTul8BJpkkwh1yn4iK-tEVRRGm_kXIfNXcE8,1849
netaddr/__init__.pyc,,
netaddr/cli.py,sha256=_r5UPGR2hAZhyD4d_KbmHQDBz_eRoWFlqFJ3xtcLxAc,1278
netaddr/cli.pyc,,
netaddr/compat.py,sha256=o4F9ROM_oAz4tIj8qeGB6viWBaXVxyOEtTPy6J4svA0,2209
netaddr/compat.pyc,,
netaddr/contrib/__init__.py,sha256=tHDAWK5T0l3IxMjlnlXRZmBVEWe_JnYu6GmdSeP8slQ,567
netaddr/contrib/__init__.pyc,,
netaddr/contrib/subnet_splitter.py,sha256=FloAxzvcMMXXHaesGJiyJWug1R8Wrxh-00Cvv6UfDAY,1769
netaddr/contrib/subnet_splitter.pyc,,
netaddr/core.py,sha256=AhQhwutql11wSUWN5_ip6cM9gHjP_zr3c7I3BIrCwIw,5898
netaddr/core.pyc,,
netaddr/eui/__init__.py,sha256=tET2m6vht-x8WUStqv7wOfVEPtPfYPZBppcAm8-5np8,25688
netaddr/eui/__init__.pyc,,
netaddr/eui/iab.idx,sha256=sQ-fBZjeCGVbYT4jYwPMzMsktJ70b8kQJhr58NVpK1g,95464
netaddr/eui/iab.txt,sha256=gXX96cz8zxlCvWd7SNwRiwmf6OLsbnvOVcDoNGNUMLQ,2453223
netaddr/eui/ieee.py,sha256=f0SxL3s6jxXysovoCw948vFzW_qDY8Q8M7p0v1FXZ6U,9303
netaddr/eui/ieee.pyc,,
netaddr/eui/oui.idx,sha256=hE0dZwt-G72WK569n7AphTpwyUNlsPdsRJ90r8SUPNI,523486
netaddr/eui/oui.txt,sha256=w6qHufJSUn1ZQEx2TcqW11gP39McK5c6NTumiFIYK0E,4458411
netaddr/fbsocket.py,sha256=L2yZtJVBhAR-Hxfx5pROfJiE2V9XkjLtYf6ONElUJnE,8247
netaddr/fbsocket.pyc,,
netaddr/ip/__init__.py,sha256=rzSNhw4H_ElQSD8-Gmm3bqbMqxGB4NdV2WijqksEs1Y,68492
netaddr/ip/__init__.pyc,,
netaddr/ip/glob.py,sha256=tlxePyqAA_JrNGj2A8_9vRgzbMIxJXScs7A0Sx1xNhI,10474
netaddr/ip/glob.pyc,,
netaddr/ip/iana.py,sha256=Cdzo4XuKq6cKJB6JxrM9SevKz9afMHrLOseEkdFQghw,14052
netaddr/ip/iana.pyc,,
netaddr/ip/ipv4-address-space.xml,sha256=QkOuS1T5_UYzzCehV-c715ukZkipxSersFJnAZXwd0E,75998
netaddr/ip/ipv6-address-space.xml,sha256=U14bpxkCfEc5JCOogbw33uLCVbzZd49WmgXxD89Atxk,9581
netaddr/ip/ipv6-unicast-address-assignments.xml,sha256=NTc9d0XDdSIxzmeoB37mfNY46r2Mig6aOCJh0k8Ydiw,14852
netaddr/ip/multicast-addresses.xml,sha256=uV2J89XT2rfvb9WopNUkETY5MlsB5a8Ct9mZOKCY1qI,153025
netaddr/ip/nmap.py,sha256=6OEyTSrxUUNvEZdhX0I27c9aV7GPKf1oJMT50toS_ro,4078
netaddr/ip/nmap.pyc,,
netaddr/ip/rfc1924.py,sha256=GCLZKg6VUhBuqZlN6JHPP-M0VaaEVysoyhx4AeZTNUM,1750
netaddr/ip/rfc1924.pyc,,
netaddr/ip/sets.py,sha256=SG_aO9KJ13Td3YPqsFvlmRDceMwblDWcSTER_wly54Q,26586
netaddr/ip/sets.pyc,,
netaddr/strategy/__init__.py,sha256=QI-vJBK4wCJGp-c6iQSkbRDE4wqy1YW44ioFa2pCJ3U,7479
netaddr/strategy/__init__.pyc,,
netaddr/strategy/eui48.py,sha256=w-ClUG8ie34GsVfNY8vHVXiTWU303v_RPjP76s_HynE,8640
netaddr/strategy/eui48.pyc,,
netaddr/strategy/eui64.py,sha256=nipcYqN5XYWu7L8gEkbeVdfqBU5CJ592uz3yFSIlD_k,7711
netaddr/strategy/eui64.pyc,,
netaddr/strategy/ipv4.py,sha256=T-dK0926vh-FWso2M7GLNTliKSteLzbZfEH37tjupHc,8095
netaddr/strategy/ipv4.pyc,,
netaddr/strategy/ipv6.py,sha256=OuwhcXbtReuI6KYtO6Nsc9kdEjThVqmY4XmvnTKBunU,7632
netaddr/strategy/ipv6.pyc,,
0
vendor/netaddr-0.8.0.dist-info/REQUESTED
vendored
Normal file
6
vendor/netaddr-0.8.0.dist-info/WHEEL
vendored
Normal file
@@ -0,0 +1,6 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.34.2)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any
3
vendor/netaddr-0.8.0.dist-info/entry_points.txt
vendored
Normal file
@@ -0,0 +1,3 @@
[console_scripts]
netaddr = netaddr.cli:main

1
vendor/netaddr-0.8.0.dist-info/top_level.txt
vendored
Normal file
@@ -0,0 +1 @@
netaddr
48
vendor/netaddr/__init__.py
vendored
Normal file
@@ -0,0 +1,48 @@
#-----------------------------------------------------------------------------
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
# Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
"""A Python library for manipulating IP and EUI network addresses."""

#: Version info (major, minor, maintenance, status)
__version__ = '0.8.0'
VERSION = tuple(int(part) for part in __version__.split('.'))
STATUS = ''

import sys as _sys

if _sys.version_info[0:2] < (2, 4):
    raise RuntimeError('Python 2.4.x or higher is required!')

from netaddr.core import (AddrConversionError, AddrFormatError,
    NotRegisteredError, ZEROFILL, Z, INET_PTON, P, NOHOST, N)

from netaddr.ip import (IPAddress, IPNetwork, IPRange, all_matching_cidrs,
    cidr_abbrev_to_verbose, cidr_exclude, cidr_merge, iprange_to_cidrs,
    iter_iprange, iter_unique_ips, largest_matching_cidr,
    smallest_matching_cidr, spanning_cidr)

from netaddr.ip.sets import IPSet

from netaddr.ip.glob import (IPGlob, cidr_to_glob, glob_to_cidrs,
    glob_to_iprange, glob_to_iptuple, iprange_to_globs, valid_glob)

from netaddr.ip.nmap import valid_nmap_range, iter_nmap_range

from netaddr.ip.rfc1924 import base85_to_ipv6, ipv6_to_base85

from netaddr.eui import EUI, IAB, OUI

from netaddr.strategy.ipv4 import valid_str as valid_ipv4

from netaddr.strategy.ipv6 import (valid_str as valid_ipv6, ipv6_compact,
    ipv6_full, ipv6_verbose)

from netaddr.strategy.eui48 import (mac_eui48, mac_unix, mac_unix_expanded,
    mac_cisco, mac_bare, mac_pgsql, valid_str as valid_mac)

from netaddr.strategy.eui64 import (eui64_base, eui64_unix, eui64_unix_expanded,
    eui64_cisco, eui64_bare, valid_str as valid_eui64)

from netaddr.contrib.subnet_splitter import SubnetSplitter
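The `VERSION` tuple in `netaddr/__init__.py` above is derived mechanically from the dotted version string; the same derivation can be checked in isolation:

```python
# Mirror of the VERSION derivation in netaddr/__init__.py.
__version__ = '0.8.0'
VERSION = tuple(int(part) for part in __version__.split('.'))
print(VERSION)  # (0, 8, 0)
```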
42
vendor/netaddr/cli.py
vendored
Normal file
@@ -0,0 +1,42 @@
#!/usr/bin/env python
#-----------------------------------------------------------------------------
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
# Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
"""an interactive shell for the netaddr library"""

import os
import sys
import netaddr
from netaddr import *

# aliases to save some typing ...
from netaddr import IPAddress as IP, IPNetwork as CIDR
from netaddr import EUI as MAC

def main():
    argv = sys.argv[1:]

    banner = "\nnetaddr shell %s - %s\n" % (netaddr.__version__, __doc__)
    exit_msg = "\nShare and enjoy!"
    rc_override = None

    try:
        try:
            # ipython >= 0.11
            from IPython.terminal.embed import InteractiveShellEmbed
            ipshell = InteractiveShellEmbed(banner1=banner, exit_msg=exit_msg)
        except ImportError:
            # ipython < 0.11
            from IPython.Shell import IPShellEmbed
            ipshell = IPShellEmbed(argv, banner, exit_msg, rc_override)
    except ImportError:
        sys.stderr.write('IPython (http://ipython.scipy.org/) not found!\n')
        sys.exit(1)

    ipshell()


if __name__ == '__main__':
    main()
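`main()` above layers two `try`/`except ImportError` blocks to cope with IPython's embed API moving between releases, degrading gracefully when IPython is absent. The same pattern, reduced to a stdlib-only sketch (the missing module name below is deliberately fictitious):

```python
import importlib


def load_first_available(candidates):
    """Return the first importable module from candidates, else None."""
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    return None


# 'no_such_module_xyz' is an illustrative missing name; 'json' is stdlib.
mod = load_first_available(['no_such_module_xyz', 'json'])
print(mod.__name__)  # json
```

Falling through a list of candidates, newest API first, is exactly what cli.py does with `IPython.terminal.embed` versus the legacy `IPython.Shell`.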
93
vendor/netaddr/compat.py
vendored
Normal file
@@ -0,0 +1,93 @@
#-----------------------------------------------------------------------------
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
# Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
"""
Compatibility wrappers providing uniform behaviour for Python code required to
run under both Python 2.x and 3.x.

All operations emulate 2.x behaviour where applicable.
"""
import sys as _sys

if _sys.version_info[0] == 3:
    # Python 3.x specific logic.
    _sys_maxint = _sys.maxsize

    _int_type = int

    _str_type = str

    _bytes_type = lambda x: bytes(x, 'UTF-8')

    _is_str = lambda x: isinstance(x, (str, type(''.encode())))

    _is_int = lambda x: isinstance(x, int)

    _callable = lambda x: hasattr(x, '__call__')

    _dict_keys = lambda x: list(x.keys())

    _dict_items = lambda x: list(x.items())

    _iter_dict_keys = lambda x: x.keys()

    def _bytes_join(*args):
        return ''.encode().join(*args)

    def _zip(*args):
        return list(zip(*args))

    def _range(*args, **kwargs):
        return list(range(*args, **kwargs))

    _iter_range = range

    def _iter_next(x):
        return next(x)

elif _sys.version_info[0:2] > (2, 3):
    # Python 2.4 or higher.
    _sys_maxint = _sys.maxint

    _int_type = (int, long)

    _str_type = basestring

    _bytes_type = str

    _is_str = lambda x: isinstance(x, basestring)

    _is_int = lambda x: isinstance(x, (int, long))

    _callable = lambda x: callable(x)

    _dict_keys = lambda x: x.keys()

    _dict_items = lambda x: x.items()

    _iter_dict_keys = lambda x: iter(x.keys())

    def _bytes_join(*args):
        return ''.join(*args)

    def _zip(*args):
        return zip(*args)

    def _range(*args, **kwargs):
        return range(*args, **kwargs)

    _iter_range = xrange

    def _iter_next(x):
        return x.next()
else:
    # Unsupported versions.
    raise RuntimeError(
        'this module only supports Python 2.4.x or higher (including 3.x)!')

try:
    from importlib import resources as _importlib_resources
except ImportError:
    import importlib_resources as _importlib_resources
12
vendor/netaddr/contrib/__init__.py
vendored
Normal file
@@ -0,0 +1,12 @@
#-----------------------------------------------------------------------------
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
# Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
"""
The netaddr.contrib namespace for non-core code contributed by users.

It is a testing ground for new ideas. Depending on the interest in
functionality found here, code may find its way into the core in various
ways, either as is or as additions to existing APIs.
"""
46
vendor/netaddr/contrib/subnet_splitter.py
vendored
Normal file
@@ -0,0 +1,46 @@
#-----------------------------------------------------------------------------
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
# Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
from netaddr.ip import IPNetwork, cidr_exclude, cidr_merge


class SubnetSplitter(object):
    """
    A handy utility class that takes a single (large) subnet and allows
    smaller subnets within its range to be extracted by CIDR prefix. Any
    remaining address space is available for subsequent extractions until
    all space is exhausted.
    """
    def __init__(self, base_cidr):
        """
        Constructor.

        :param base_cidr: an IPv4 or IPv6 address with a CIDR prefix.
            (see IPNetwork.__init__ for full details).
        """
        self._subnets = set([IPNetwork(base_cidr)])

    def extract_subnet(self, prefix, count=None):
        """Extract 1 or more subnets of size specified by CIDR prefix."""
        for cidr in self.available_subnets():
            subnets = list(cidr.subnet(prefix, count=count))
            if not subnets:
                continue
            self.remove_subnet(cidr)
            self._subnets = self._subnets.union(
                set(
                    cidr_exclude(cidr, cidr_merge(subnets)[0])
                )
            )
            return subnets
        return []

    def available_subnets(self):
        """Returns a list of the currently available subnets."""
        return sorted(self._subnets, key=lambda x: x.prefixlen, reverse=True)

    def remove_subnet(self, ip_network):
        """Remove a specified IPNetwork from available address space."""
        self._subnets.remove(ip_network)
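SubnetSplitter's extract-and-keep-the-remainder idea rests on two primitives: carving fixed-prefix subnets out of a network, and computing the address space left over after the carve. The stdlib `ipaddress` module offers direct analogues of both (`subnets()` and `address_exclude()`), so the mechanism can be sketched without netaddr installed; this is an analogy, not netaddr's implementation:

```python
import ipaddress

base = ipaddress.ip_network('192.0.2.0/24')  # illustrative documentation range

# Extract one /26 from the base network ...
extracted = next(base.subnets(new_prefix=26))
print(extracted)  # 192.0.2.0/26

# ... and the remainder stays available for later extractions,
# exactly the role of SubnetSplitter's internal subnet set.
remainder = sorted(base.address_exclude(extracted))
print([str(n) for n in remainder])  # ['192.0.2.64/26', '192.0.2.128/25']
```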
206
vendor/netaddr/core.py
vendored
Normal file
@@ -0,0 +1,206 @@
#-----------------------------------------------------------------------------
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
# Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
"""Common code shared between various netaddr sub modules"""

import sys as _sys
import pprint as _pprint

from netaddr.compat import _callable, _iter_dict_keys

#: True if platform is natively big endian, False otherwise.
BIG_ENDIAN_PLATFORM = _sys.byteorder == 'big'

#: Use inet_pton() semantics instead of inet_aton() when parsing IPv4.
P = INET_PTON = 1

#: Remove any preceding zeros from IPv4 address octets before parsing.
Z = ZEROFILL = 2

#: Remove any host bits found to the right of an applied CIDR prefix.
N = NOHOST = 4

#-----------------------------------------------------------------------------
# Custom exceptions.
#-----------------------------------------------------------------------------
class AddrFormatError(Exception):
    """
    An Exception indicating a network address is not correctly formatted.
    """
    pass


class AddrConversionError(Exception):
    """
    An Exception indicating a failure to convert between address types or
    notations.
    """
    pass


class NotRegisteredError(Exception):
    """
    An Exception indicating that an OUI or IAB was not found in the IEEE
    Registry.
    """
    pass


try:
    a = 42
    a.bit_length()
    # No exception, must be Python 2.7 or 3.1+ -> can use bit_length()
    def num_bits(int_val):
        """
        :param int_val: an unsigned integer.

        :return: the minimum number of bits needed to represent value provided.
        """
        return int_val.bit_length()
except AttributeError:
    # a.bit_length() raised, must be an older Python version.
    def num_bits(int_val):
        """
        :param int_val: an unsigned integer.

        :return: the minimum number of bits needed to represent value provided.
        """
        numbits = 0
        while int_val:
            numbits += 1
            int_val >>= 1
        return numbits
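Both `num_bits` definitions above compute the same quantity; on any modern Python the shift-count fallback agrees with `int.bit_length()` everywhere, which is easy to check directly:

```python
def num_bits_fallback(int_val):
    """Shift-count fallback, as in the except branch above."""
    numbits = 0
    while int_val:
        numbits += 1
        int_val >>= 1
    return numbits


# The fallback and the builtin agree across small and large values.
for value in (0, 1, 2, 255, 256, 2**32 - 1):
    assert num_bits_fallback(value) == value.bit_length()

print(num_bits_fallback(255), (255).bit_length())  # 8 8
```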
class Subscriber(object):
    """
    An abstract class defining the interface expected by a Publisher.
    """

    def update(self, data):
        """
        A callback method used by a Publisher to notify this Subscriber about
        updates.

        :param data: a Python object containing data provided by Publisher.
        """
        raise NotImplementedError('cannot invoke virtual method!')


class PrettyPrinter(Subscriber):
    """
    A concrete Subscriber that employs the pprint module in the standard
    library to format all data from updates received, writing them to a
    file-like object.

    Useful as a debugging aid.
    """

    def __init__(self, fh=_sys.stdout, write_eol=True):
        """
        Constructor.

        :param fh: a file-like object to write updates to.
            Default: sys.stdout.

        :param write_eol: if ``True`` this object will write newlines to
            output, if ``False`` it will not.
        """
        self.fh = fh
        self.write_eol = write_eol

    def update(self, data):
        """
        A callback method used by a Publisher to notify this Subscriber about
        updates.

        :param data: a Python object containing data provided by Publisher.
        """
        self.fh.write(_pprint.pformat(data))
        if self.write_eol:
            self.fh.write("\n")


class Publisher(object):
    """
    A 'push' Publisher that maintains a list of Subscriber objects, notifying
    them of state changes by passing them update data when it encounters
    events of interest.
    """

    def __init__(self):
        """Constructor"""
        self.subscribers = []

    def attach(self, subscriber):
        """
        Add a new subscriber.

        :param subscriber: a new object that implements the Subscriber object
            interface.
        """
        if hasattr(subscriber, 'update') and _callable(subscriber.update):
            if subscriber not in self.subscribers:
                self.subscribers.append(subscriber)
        else:
            raise TypeError('%r does not support required interface!' % subscriber)

    def detach(self, subscriber):
        """
        Remove an existing subscriber.

        :param subscriber: an existing object that implements the Subscriber
            object interface.
        """
        try:
            self.subscribers.remove(subscriber)
        except ValueError:
            pass

    def notify(self, data):
        """
        Send update data to all registered Subscribers.

        :param data: the data to be passed to each registered Subscriber.
        """
        for subscriber in self.subscribers:
            subscriber.update(data)
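The Publisher/Subscriber pair above is a plain push-style observer pattern. A self-contained miniature of the same attach/notify flow, independent of netaddr's classes (names here are illustrative):

```python
class MiniPublisher:
    """Push-style publisher mirroring the attach/notify flow above."""
    def __init__(self):
        self.subscribers = []

    def attach(self, subscriber):
        # Same duck-typed interface check as Publisher.attach().
        if not callable(getattr(subscriber, 'update', None)):
            raise TypeError('%r does not support required interface!' % subscriber)
        if subscriber not in self.subscribers:
            self.subscribers.append(subscriber)

    def notify(self, data):
        for subscriber in self.subscribers:
            subscriber.update(data)


class Collector:
    """A subscriber that simply records every update it receives."""
    def __init__(self):
        self.seen = []

    def update(self, data):
        self.seen.append(data)


pub = MiniPublisher()
sink = Collector()
pub.attach(sink)
pub.notify({'event': 'change'})
print(sink.seen)  # [{'event': 'change'}]
```

As in Publisher, attaching the same subscriber twice is a no-op, so each update is delivered exactly once per subscriber.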
class DictDotLookup(object):
    """
    Creates objects that behave much like dictionaries, but allow nested
    key access using object '.' (dot) lookups.

    Recipe 576586: Dot-style nested lookups over dictionary based data
    structures - http://code.activestate.com/recipes/576586/
    """
    def __init__(self, d):
        for k in d:
            if isinstance(d[k], dict):
                self.__dict__[k] = DictDotLookup(d[k])
            elif isinstance(d[k], (list, tuple)):
                l = []
                for v in d[k]:
                    if isinstance(v, dict):
                        l.append(DictDotLookup(v))
                    else:
                        l.append(v)
                self.__dict__[k] = l
            else:
                self.__dict__[k] = d[k]

    def __getitem__(self, name):
        if name in self.__dict__:
            return self.__dict__[name]

    def __iter__(self):
        return _iter_dict_keys(self.__dict__)

    def __repr__(self):
        return _pprint.pformat(self.__dict__)
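A short sketch of the dot-lookup behaviour. A trimmed copy of the `DictDotLookup` constructor is restated here (with hypothetical sample data) so the example runs standalone:

```python
class DictDotLookup(object):
    """Trimmed restatement: wraps dicts so nested keys read as attributes."""
    def __init__(self, d):
        for k in d:
            if isinstance(d[k], dict):
                self.__dict__[k] = DictDotLookup(d[k])
            elif isinstance(d[k], (list, tuple)):
                # Dicts nested inside lists/tuples are wrapped too.
                self.__dict__[k] = [
                    DictDotLookup(v) if isinstance(v, dict) else v
                    for v in d[k]
                ]
            else:
                self.__dict__[k] = d[k]


data = DictDotLookup({
    'org': 'ACME',
    'address': {'city': 'Springfield'},
    'tags': [{'id': 1}],
})
print(data.org)            # plain value -> ACME
print(data.address.city)   # nested dict via dot lookup -> Springfield
print(data.tags[0].id)     # dict inside a list -> 1
```

This is exactly how `OUI.registration()` and `EUI.info` later expose IEEE record dicts as attribute-style objects.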
749
vendor/netaddr/eui/__init__.py
vendored
Normal file
@ -0,0 +1,749 @@
#-----------------------------------------------------------------------------
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
# Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
"""
Classes and functions for dealing with MAC addresses, EUI-48, EUI-64, OUI, IAB
identifiers.
"""

from netaddr.core import NotRegisteredError, AddrFormatError, DictDotLookup
from netaddr.strategy import eui48 as _eui48, eui64 as _eui64
from netaddr.strategy.eui48 import mac_eui48
from netaddr.strategy.eui64 import eui64_base
from netaddr.ip import IPAddress
from netaddr.compat import _importlib_resources, _is_int, _is_str


class OUI(BaseIdentifier):
|
||||
"""
|
||||
An individual IEEE OUI (Organisationally Unique Identifier).
|
||||
|
||||
For online details see - http://standards.ieee.org/regauth/oui/
|
||||
|
||||
"""
|
||||
__slots__ = ('records',)
|
||||
|
||||
def __init__(self, oui):
|
||||
"""
|
||||
Constructor
|
||||
|
||||
:param oui: an OUI string ``XX-XX-XX`` or an unsigned integer. \
|
||||
Also accepts and parses full MAC/EUI-48 address strings (but not \
|
||||
MAC/EUI-48 integers)!
|
||||
"""
|
||||
super(OUI, self).__init__()
|
||||
|
||||
# Lazy loading of IEEE data structures.
|
||||
from netaddr.eui import ieee
|
||||
|
||||
self.records = []
|
||||
|
||||
if isinstance(oui, str):
|
||||
#TODO: Improve string parsing here.
|
||||
#TODO: Accept full MAC/EUI-48 addressses as well as XX-XX-XX
|
||||
#TODO: and just take /16 (see IAB for details)
|
||||
self._value = int(oui.replace('-', ''), 16)
|
||||
elif _is_int(oui):
|
||||
if 0 <= oui <= 0xffffff:
|
||||
self._value = oui
|
||||
else:
|
||||
raise ValueError('OUI int outside expected range: %r' % (oui,))
|
||||
else:
|
||||
raise TypeError('unexpected OUI format: %r' % (oui,))
|
||||
|
||||
# Discover offsets.
|
||||
if self._value in ieee.OUI_INDEX:
|
||||
fh = _importlib_resources.open_binary(__package__, 'oui.txt')
|
||||
for (offset, size) in ieee.OUI_INDEX[self._value]:
|
||||
fh.seek(offset)
|
||||
data = fh.read(size).decode('UTF-8')
|
||||
self._parse_data(data, offset, size)
|
||||
fh.close()
|
||||
else:
|
||||
raise NotRegisteredError('OUI %r not registered!' % (oui,))
|
||||
|
||||
def __eq__(self, other):
|
||||
if not isinstance(other, OUI):
|
||||
try:
|
||||
other = self.__class__(other)
|
||||
except Exception:
|
||||
return NotImplemented
|
||||
return self._value == other._value
|
||||
|
||||
def __ne__(self, other):
|
||||
if not isinstance(other, OUI):
|
||||
try:
|
||||
other = self.__class__(other)
|
||||
except Exception:
|
||||
return NotImplemented
|
||||
return self._value != other._value
|
||||
|
||||
def __getstate__(self):
|
||||
""":returns: Pickled state of an `OUI` object."""
|
||||
return self._value, self.records
|
||||
|
||||
def __setstate__(self, state):
|
||||
""":param state: data used to unpickle a pickled `OUI` object."""
|
||||
self._value, self.records = state
|
||||
|
||||
def _parse_data(self, data, offset, size):
|
||||
"""Returns a dict record from raw OUI record data"""
|
||||
record = {
|
||||
'idx': 0,
|
||||
'oui': '',
|
||||
'org': '',
|
||||
'address': [],
|
||||
'offset': offset,
|
||||
'size': size,
|
||||
}
|
||||
|
||||
for line in data.split("\n"):
|
||||
line = line.strip()
|
||||
if not line:
|
||||
continue
|
||||
|
||||
if '(hex)' in line:
|
||||
record['idx'] = self._value
|
||||
record['org'] = line.split(None, 2)[2]
|
||||
record['oui'] = str(self)
|
||||
elif '(base 16)' in line:
|
||||
continue
|
||||
else:
|
||||
record['address'].append(line)
|
||||
|
||||
self.records.append(record)
|
||||
|
||||
@property
|
||||
def reg_count(self):
|
||||
"""Number of registered organisations with this OUI"""
|
||||
return len(self.records)
|
||||
|
||||
def registration(self, index=0):
|
||||
"""
|
||||
The IEEE registration details for this OUI.
|
||||
|
||||
:param index: the index of record (may contain multiple registrations)
|
||||
(Default: 0 - first registration)
|
||||
|
||||
:return: Objectified Python data structure containing registration
|
||||
details.
|
||||
"""
|
||||
return DictDotLookup(self.records[index])
|
||||
|
||||
def __str__(self):
|
||||
""":return: string representation of this OUI"""
|
||||
int_val = self._value
|
||||
return "%02X-%02X-%02X" % (
|
||||
(int_val >> 16) & 0xff,
|
||||
(int_val >> 8) & 0xff,
|
||||
int_val & 0xff)
|
||||
|
||||
def __repr__(self):
|
||||
""":return: executable Python string to recreate equivalent object."""
|
||||
return "OUI('%s')" % self
|
||||
|
||||
|
||||
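The ``XX-XX-XX`` formatting in `OUI.__str__` and the string parsing in the constructor are plain byte extraction on a 24-bit integer. A standalone sketch of that round trip (local helper names, no netaddr required):

```python
def oui_int_to_str(int_val):
    # Extract the three octets of a 24-bit OUI, high byte first.
    return '%02X-%02X-%02X' % (
        (int_val >> 16) & 0xff,
        (int_val >> 8) & 0xff,
        int_val & 0xff)


def oui_str_to_int(oui):
    # Inverse operation, mirroring the constructor's string branch.
    return int(oui.replace('-', ''), 16)


print(oui_int_to_str(0x0050C2))           # 00-50-C2
print(oui_str_to_int('00-50-C2') == 0x0050C2)   # True
```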
class IAB(BaseIdentifier):
    """
    An individual IEEE IAB (Individual Address Block) identifier.

    For online details see - http://standards.ieee.org/regauth/oui/
    """
    IAB_EUI_VALUES = (0x0050c2, 0x40d855)

    __slots__ = ('record',)

    @classmethod
    def split_iab_mac(cls, eui_int, strict=False):
        """
        :param eui_int: a MAC IAB as an unsigned integer.

        :param strict: If True, raises a ValueError if the last 12 bits of
            the IAB MAC/EUI-48 address are non-zero, ignores them otherwise.
            (Default: False)
        """
        if (eui_int >> 12) in cls.IAB_EUI_VALUES:
            return eui_int, 0

        user_mask = 2 ** 12 - 1
        iab_mask = (2 ** 48 - 1) ^ user_mask
        iab_bits = eui_int >> 12
        user_bits = (eui_int | iab_mask) - iab_mask

        if (iab_bits >> 12) in cls.IAB_EUI_VALUES:
            if strict and user_bits != 0:
                raise ValueError('%r is not a strict IAB!' % hex(user_bits))
        else:
            raise ValueError('%r is not an IAB address!' % hex(eui_int))

        return iab_bits, user_bits

    def __init__(self, iab, strict=False):
        """
        Constructor

        :param iab: an IAB string ``00-50-C2-XX-X0-00`` or an unsigned \
            integer. This address looks like an EUI-48 but it should not \
            have any non-zero bits in the last 3 bytes.

        :param strict: If True, raises a ValueError if the last 12 bits \
            of the IAB MAC/EUI-48 address are non-zero, ignores them \
            otherwise. (Default: False)
        """
        super(IAB, self).__init__()

        # Lazy loading of IEEE data structures.
        from netaddr.eui import ieee

        self.record = {
            'idx': 0,
            'iab': '',
            'org': '',
            'address': [],
            'offset': 0,
            'size': 0,
        }

        if isinstance(iab, str):
            #TODO: Improve string parsing here.
            #TODO: '00-50-C2' is actually invalid.
            #TODO: Should be '00-50-C2-00-00-00' (i.e. a full MAC/EUI-48)
            int_val = int(iab.replace('-', ''), 16)
            iab_int, user_int = self.split_iab_mac(int_val, strict=strict)
            self._value = iab_int
        elif _is_int(iab):
            iab_int, user_int = self.split_iab_mac(iab, strict=strict)
            self._value = iab_int
        else:
            raise TypeError('unexpected IAB format: %r!' % (iab,))

        # Discover offsets.
        if self._value in ieee.IAB_INDEX:
            fh = _importlib_resources.open_binary(__package__, 'iab.txt')
            (offset, size) = ieee.IAB_INDEX[self._value][0]
            self.record['offset'] = offset
            self.record['size'] = size
            fh.seek(offset)
            data = fh.read(size).decode('UTF-8')
            self._parse_data(data, offset, size)
            fh.close()
        else:
            raise NotRegisteredError('IAB %r not registered!' % (iab,))

    def __eq__(self, other):
        if not isinstance(other, IAB):
            try:
                other = self.__class__(other)
            except Exception:
                return NotImplemented
        return self._value == other._value

    def __ne__(self, other):
        if not isinstance(other, IAB):
            try:
                other = self.__class__(other)
            except Exception:
                return NotImplemented
        return self._value != other._value

    def __getstate__(self):
        """:returns: Pickled state of an `IAB` object."""
        return self._value, self.record

    def __setstate__(self, state):
        """:param state: data used to unpickle a pickled `IAB` object."""
        self._value, self.record = state

    def _parse_data(self, data, offset, size):
        """Returns a dict record from raw IAB record data"""
        for line in data.split("\n"):
            line = line.strip()
            if not line:
                continue

            if '(hex)' in line:
                self.record['idx'] = self._value
                self.record['org'] = line.split(None, 2)[2]
                self.record['iab'] = str(self)
            elif '(base 16)' in line:
                continue
            else:
                self.record['address'].append(line)

    def registration(self):
        """The IEEE registration details for this IAB"""
        return DictDotLookup(self.record)

    def __str__(self):
        """:return: string representation of this IAB"""
        int_val = self._value << 4

        return "%02X-%02X-%02X-%02X-%02X-00" % (
            (int_val >> 32) & 0xff,
            (int_val >> 24) & 0xff,
            (int_val >> 16) & 0xff,
            (int_val >> 8) & 0xff,
            int_val & 0xff)

    def __repr__(self):
        """:return: executable Python string to recreate equivalent object."""
        return "IAB('%s')" % self


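The bit twiddling in `IAB.split_iab_mac` splits a 48-bit MAC into a 36-bit IAB prefix and a 12-bit user part (and passes 36-bit IAB integers through unchanged). An equivalent standalone restatement, with the error checks reordered but logically the same:

```python
IAB_EUI_VALUES = (0x0050c2, 0x40d855)


def split_iab_mac(eui_int, strict=False):
    if (eui_int >> 12) in IAB_EUI_VALUES:
        # Already a 36-bit IAB value - nothing to strip off.
        return eui_int, 0

    user_mask = 2 ** 12 - 1        # low 12 bits: per-user address space
    iab_bits = eui_int >> 12       # high 36 bits: the IAB itself
    user_bits = eui_int & user_mask

    # The top 24 bits of the IAB must be one of the registered prefixes.
    if (iab_bits >> 12) not in IAB_EUI_VALUES:
        raise ValueError('%r is not an IAB address!' % hex(eui_int))
    if strict and user_bits != 0:
        raise ValueError('%r is not a strict IAB!' % hex(user_bits))
    return iab_bits, user_bits


iab, user = split_iab_mac(0x0050C2ABC123)
print(hex(iab), hex(user))   # 0x50c2abc 0x123
```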
class EUI(BaseIdentifier):
    """
    An IEEE EUI (Extended Unique Identifier).

    Both EUI-48 (used for layer 2 MAC addresses) and EUI-64 are supported.

    Input parsing for EUI-48 addresses is flexible, supporting many MAC
    variants.
    """
    __slots__ = ('_module', '_dialect')

    def __init__(self, addr, version=None, dialect=None):
        """
        Constructor.

        :param addr: an EUI-48 (MAC) or EUI-64 address in string format or \
            an unsigned integer. May also be another EUI object (copy \
            construction).

        :param version: (optional) the explicit EUI address version, either \
            48 or 64. Mainly used to distinguish EUI-48 and EUI-64 identifiers \
            specified as integers which may be numerically equivalent.

        :param dialect: (optional) the mac_* dialect to be used to configure \
            the formatting of EUI-48 (MAC) addresses.
        """
        super(EUI, self).__init__()

        self._module = None

        if isinstance(addr, EUI):
            # Copy constructor.
            if version is not None and version != addr._module.version:
                raise ValueError('cannot switch EUI versions using '
                                 'copy constructor!')
            self._module = addr._module
            self._value = addr._value
            self.dialect = addr.dialect
            return

        if version is not None:
            if version == 48:
                self._module = _eui48
            elif version == 64:
                self._module = _eui64
            else:
                raise ValueError('unsupported EUI version %r' % version)
        else:
            # Choose a default version when addr is an integer and version is
            # not specified.
            if _is_int(addr):
                if 0 <= addr <= 0xffffffffffff:
                    self._module = _eui48
                elif 0xffffffffffff < addr <= 0xffffffffffffffff:
                    self._module = _eui64

        self.value = addr

        # Choose a dialect for MAC formatting.
        self.dialect = dialect

    def __getstate__(self):
        """:returns: Pickled state of an `EUI` object."""
        return self._value, self._module.version, self.dialect

    def __setstate__(self, state):
        """:param state: data used to unpickle a pickled `EUI` object."""
        value, version, dialect = state

        self._value = value

        if version == 48:
            self._module = _eui48
        elif version == 64:
            self._module = _eui64
        else:
            raise ValueError('unpickling failed for object state: %s'
                             % (state,))

        self.dialect = dialect

    def _get_value(self):
        return self._value

    def _set_value(self, value):
        if self._module is None:
            # EUI version is implicit, detect it from value.
            for module in (_eui48, _eui64):
                try:
                    self._value = module.str_to_int(value)
                    self._module = module
                    break
                except AddrFormatError:
                    try:
                        if 0 <= int(value) <= module.max_int:
                            self._value = int(value)
                            self._module = module
                            break
                    except ValueError:
                        pass

            if self._module is None:
                raise AddrFormatError('failed to detect EUI version: %r'
                                      % (value,))
        else:
            # EUI version is explicit.
            if _is_str(value):
                try:
                    self._value = self._module.str_to_int(value)
                except AddrFormatError:
                    raise AddrFormatError('address %r is not an EUIv%d'
                                          % (value, self._module.version))
            else:
                if 0 <= int(value) <= self._module.max_int:
                    self._value = int(value)
                else:
                    raise AddrFormatError('bad address format: %r' % (value,))

    value = property(_get_value, _set_value, None,
        'a positive integer representing the value of this EUI identifier.')

    def _get_dialect(self):
        return self._dialect

    def _validate_dialect(self, value):
        if value is None:
            if self._module is _eui64:
                return eui64_base
            else:
                return mac_eui48
        else:
            if hasattr(value, 'word_size') and hasattr(value, 'word_fmt'):
                return value
            else:
                raise TypeError('custom dialects should subclass mac_eui48!')

    def _set_dialect(self, value):
        self._dialect = self._validate_dialect(value)

    dialect = property(_get_dialect, _set_dialect, None,
        'a Python class providing support for the interpretation of '
        'various MAC address formats.')

    @property
    def oui(self):
        """The OUI (Organisationally Unique Identifier) for this EUI."""
        if self._module == _eui48:
            return OUI(self.value >> 24)
        elif self._module == _eui64:
            return OUI(self.value >> 40)

    @property
    def ei(self):
        """The EI (Extension Identifier) for this EUI"""
        if self._module == _eui48:
            return '%02X-%02X-%02X' % tuple(self[3:6])
        elif self._module == _eui64:
            return '%02X-%02X-%02X-%02X-%02X' % tuple(self[3:8])

    def is_iab(self):
        """:return: True if this EUI is an IAB address, False otherwise"""
        return (self._value >> 24) in IAB.IAB_EUI_VALUES

    @property
    def iab(self):
        """
        If is_iab() is True, the IAB (Individual Address Block) is returned,
        ``None`` otherwise.
        """
        if self.is_iab():
            return IAB(self._value >> 12)

    @property
    def version(self):
        """The EUI version represented by this EUI object."""
        return self._module.version

    def __getitem__(self, idx):
        """
        :return: The integer value of the word referenced by index (both \
            positive and negative). Raises ``IndexError`` if index is out \
            of bounds. Also supports Python list slices for accessing \
            word groups.
        """
        if _is_int(idx):
            # Indexing, including negative indexing goodness.
            num_words = self._dialect.num_words
            if not (-num_words) <= idx <= (num_words - 1):
                raise IndexError('index out of range for address type!')
            return self._module.int_to_words(self._value, self._dialect)[idx]
        elif isinstance(idx, slice):
            words = self._module.int_to_words(self._value, self._dialect)
            return [words[i] for i in range(*idx.indices(len(words)))]
        else:
            raise TypeError('unsupported type %r!' % (idx,))

    def __setitem__(self, idx, value):
        """Set the value of the word referenced by index in this address"""
        if isinstance(idx, slice):
            # TODO - settable slices.
            raise NotImplementedError('settable slices are not supported!')

        if not _is_int(idx):
            raise TypeError('index not an integer!')

        if not 0 <= idx <= (self._dialect.num_words - 1):
            raise IndexError('index %d outside address type boundary!' % (idx,))

        if not _is_int(value):
            raise TypeError('value not an integer!')

        if not 0 <= value <= self._dialect.max_word:
            raise IndexError('value %d outside word size maximum of %d bits!'
                             % (value, self._dialect.word_size))

        words = list(self._module.int_to_words(self._value, self._dialect))
        words[idx] = value
        self._value = self._module.words_to_int(words)

    def __hash__(self):
        """:return: hash of this EUI object suitable for dict keys, sets etc"""
        return hash((self.version, self._value))

    def __eq__(self, other):
        """
        :return: ``True`` if this EUI object is numerically the same as other, \
            ``False`` otherwise.
        """
        if not isinstance(other, EUI):
            try:
                other = self.__class__(other)
            except Exception:
                return NotImplemented
        return (self.version, self._value) == (other.version, other._value)

    def __ne__(self, other):
        """
        :return: ``True`` if this EUI object is numerically different from \
            other, ``False`` otherwise.
        """
        if not isinstance(other, EUI):
            try:
                other = self.__class__(other)
            except Exception:
                return NotImplemented
        return (self.version, self._value) != (other.version, other._value)

    def __lt__(self, other):
        """
        :return: ``True`` if this EUI object is numerically lower in value than \
            other, ``False`` otherwise.
        """
        if not isinstance(other, EUI):
            try:
                other = self.__class__(other)
            except Exception:
                return NotImplemented
        return (self.version, self._value) < (other.version, other._value)

    def __le__(self, other):
        """
        :return: ``True`` if this EUI object is numerically lower or equal in \
            value to other, ``False`` otherwise.
        """
        if not isinstance(other, EUI):
            try:
                other = self.__class__(other)
            except Exception:
                return NotImplemented
        return (self.version, self._value) <= (other.version, other._value)

    def __gt__(self, other):
        """
        :return: ``True`` if this EUI object is numerically greater in value \
            than other, ``False`` otherwise.
        """
        if not isinstance(other, EUI):
            try:
                other = self.__class__(other)
            except Exception:
                return NotImplemented
        return (self.version, self._value) > (other.version, other._value)

    def __ge__(self, other):
        """
        :return: ``True`` if this EUI object is numerically greater or equal \
            in value to other, ``False`` otherwise.
        """
        if not isinstance(other, EUI):
            try:
                other = self.__class__(other)
            except Exception:
                return NotImplemented
        return (self.version, self._value) >= (other.version, other._value)

    def bits(self, word_sep=None):
        """
        :param word_sep: (optional) the separator to insert between words. \
            Default: None - use default separator for address type.

        :return: human-readable binary digit string of this address.
        """
        return self._module.int_to_bits(self._value, word_sep)

    @property
    def packed(self):
        """The value of this EUI address as a packed binary string."""
        return self._module.int_to_packed(self._value)

    @property
    def words(self):
        """A list of unsigned integer octets found in this EUI address."""
        return self._module.int_to_words(self._value)

    @property
    def bin(self):
        """
        The value of this EUI address in standard Python binary
        representational form (0bxxx). A back port of the format provided by
        the builtin bin() function found in Python 2.6.x and higher.
        """
        return self._module.int_to_bin(self._value)

    def eui64(self):
        """
        - If this object represents an EUI-48 it is converted to EUI-64 \
            as per the standard.
        - If this object is already an EUI-64, a new, numerically \
            equivalent object is returned instead.

        :return: The value of this EUI object as a new 64-bit EUI object.
        """
        if self.version == 48:
            # Convert 11:22:33:44:55:66 into 11:22:33:FF:FE:44:55:66.
            first_three = self._value >> 24
            last_three = self._value & 0xffffff
            new_value = (first_three << 40) | 0xfffe000000 | last_three
        else:
            # Is already an EUI-64.
            new_value = self._value
        return self.__class__(new_value, version=64)

    def modified_eui64(self):
        """
        - Create a new EUI object with a modified EUI-64 as described in
          RFC 4291 section 2.5.1.

        :return: a new and modified 64-bit EUI object.
        """
        # Modified EUI-64 format interface identifiers are formed by inverting
        # the "u" bit (universal/local bit in IEEE EUI-64 terminology) when
        # forming the interface identifier from IEEE EUI-64 identifiers. In
        # the resulting Modified EUI-64 format, the "u" bit is set to one (1)
        # to indicate universal scope, and it is set to zero (0) to indicate
        # local scope.
        eui64 = self.eui64()
        eui64._value ^= 0x00000000000000000200000000000000
        return eui64

    def ipv6(self, prefix):
        """
        .. note:: This poses security risks in certain scenarios. \
            Please read RFC 4941 for details. Reference: RFCs 4291 and 4941.

        :param prefix: ipv6 prefix

        :return: new IPv6 `IPAddress` object based on this `EUI` \
            using the technique described in RFC 4291.
        """
        int_val = int(prefix) + int(self.modified_eui64())
        return IPAddress(int_val, version=6)

    def ipv6_link_local(self):
        """
        .. note:: This poses security risks in certain scenarios. \
            Please read RFC 4941 for details. Reference: RFCs 4291 and 4941.

        :return: new link local IPv6 `IPAddress` object based on this `EUI` \
            using the technique described in RFC 4291.
        """
        return self.ipv6(0xfe800000000000000000000000000000)

    @property
    def info(self):
        """
        A record dict containing IEEE registration details for this EUI
        (MAC-48) if available, None otherwise.
        """
        data = {'OUI': self.oui.registration()}
        if self.is_iab():
            data['IAB'] = self.iab.registration()

        return DictDotLookup(data)

    def format(self, dialect=None):
        """
        Format the EUI into the representational format according to the given
        dialect.

        :param dialect: the mac_* dialect defining the formatting of EUI-48 \
            (MAC) addresses.

        :return: EUI in representational format according to the given dialect
        """
        validated_dialect = self._validate_dialect(dialect)
        return self._module.int_to_str(self._value, validated_dialect)

    def __str__(self):
        """:return: EUI in representational format"""
        return self._module.int_to_str(self._value, self._dialect)

    def __repr__(self):
        """:return: executable Python string to recreate equivalent object."""
        return "EUI('%s')" % self
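The `eui64()` / `modified_eui64()` / `ipv6_link_local()` chain above is pure integer arithmetic. A standalone sketch of that pipeline (the function names here are local stand-ins, not the netaddr API, and the sample MAC is hypothetical):

```python
def eui48_to_eui64(mac_int):
    # Insert FF:FE between the upper and lower three octets, per the standard.
    first_three = mac_int >> 24
    last_three = mac_int & 0xffffff
    return (first_three << 40) | 0xfffe000000 | last_three


def modified_eui64(mac_int):
    # Flip the universal/local "u" bit (RFC 4291 section 2.5.1).
    return eui48_to_eui64(mac_int) ^ 0x0200000000000000


def ipv6_link_local(mac_int):
    # Add the modified EUI-64 interface identifier to the fe80::/64 prefix.
    return 0xfe800000000000000000000000000000 + modified_eui64(mac_int)


mac = 0x001B774954FD                       # 00-1B-77-49-54-FD
print(hex(eui48_to_eui64(mac)))            # 0x1b77fffe4954fd
print(hex(ipv6_link_local(mac)))
```

This mirrors why `EUI('00-1B-77-49-54-FD').ipv6_link_local()` yields an `fe80::...ff:fe...` address, and why RFC 4941 warns that such addresses embed the hardware MAC.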
4575
vendor/netaddr/eui/iab.idx
vendored
Normal file
File diff suppressed because it is too large
Load Diff
27381
vendor/netaddr/eui/iab.txt
vendored
Normal file
File diff suppressed because it is too large
Load Diff
293
vendor/netaddr/eui/ieee.py
vendored
Normal file
@ -0,0 +1,293 @@
#!/usr/bin/env python
#-----------------------------------------------------------------------------
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
# Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
#
# DISCLAIMER
#
# netaddr is not sponsored nor endorsed by the IEEE.
#
# Use of data from the IEEE (Institute of Electrical and Electronics
# Engineers) is subject to copyright. See the following URL for
# details :-
#
# - http://www.ieee.org/web/publications/rights/legal.html
#
# IEEE data files included with netaddr are not modified in any way but are
# parsed and made available to end users through an API. There is no
# guarantee that referenced files are not out of date.
#
# See README file and source code for URLs to latest copies of the relevant
# files.
#
#-----------------------------------------------------------------------------
"""
Provides access to public OUI and IAB registration data published by the IEEE.

More details can be found at the following URLs :-

- IEEE Home Page - http://www.ieee.org/
- Registration Authority Home Page - http://standards.ieee.org/regauth/
"""

import os.path as _path
import csv as _csv

from netaddr.compat import _bytes_type, _importlib_resources
from netaddr.core import Subscriber, Publisher


#: OUI index lookup dictionary.
OUI_INDEX = {}

#: IAB index lookup dictionary.
IAB_INDEX = {}


class FileIndexer(Subscriber):
    """
    A concrete Subscriber that receives OUI record offset information and
    writes it to an index data file as a set of comma separated records.
    """
    def __init__(self, index_file):
        """
        Constructor.

        :param index_file: a file-like object or name of the index file where
            index records will be written.
        """
        if hasattr(index_file, 'readline') and hasattr(index_file, 'tell'):
            self.fh = index_file
        else:
            self.fh = open(index_file, 'w')

        self.writer = _csv.writer(self.fh, lineterminator="\n")

    def update(self, data):
        """
        Receives and writes index data to a CSV data file.

        :param data: record containing offset record information.
        """
        self.writer.writerow(data)


class OUIIndexParser(Publisher):
    """
    A concrete Publisher that parses OUI (Organisationally Unique Identifier)
    records from IEEE text-based registration files.

    It notifies registered Subscribers as each record is encountered, passing
    on the record's position relative to the start of the file (offset) and
    the size of the record (in bytes).

    The file processed by this parser is available online from this URL :-

    - http://standards.ieee.org/regauth/oui/oui.txt

    This is a sample of the record structure expected::

        00-CA-FE   (hex)        ACME CORPORATION
        00CAFE     (base 16)    ACME CORPORATION
                                1 MAIN STREET
                                SPRINGFIELD
                                UNITED STATES
    """
    def __init__(self, ieee_file):
        """
        Constructor.

        :param ieee_file: a file-like object or name of a file containing OUI
            records. When using a file-like object always open it in binary
            mode, otherwise offsets will probably misbehave.
        """
        super(OUIIndexParser, self).__init__()

        if hasattr(ieee_file, 'readline') and hasattr(ieee_file, 'tell'):
            self.fh = ieee_file
        else:
            self.fh = open(ieee_file, 'rb')

    def parse(self):
        """
        Starts the parsing process which detects records and notifies
        registered subscribers as it finds each OUI record.
        """
        skip_header = True
        record = None
        size = 0

        marker = _bytes_type('(hex)')
        hyphen = _bytes_type('-')
        empty_string = _bytes_type('')

        while True:
            line = self.fh.readline()

            if not line:
                break  # EOF, we're done

            if skip_header and marker in line:
                skip_header = False

            if skip_header:
                # ignoring header section
                continue

            if marker in line:
                # record start
                if record is not None:
                    # a complete record.
                    record.append(size)
                    self.notify(record)

                size = len(line)
                offset = (self.fh.tell() - len(line))
                oui = line.split()[0]
                index = int(oui.replace(hyphen, empty_string), 16)
                record = [index, offset]
            else:
                # within record
                size += len(line)

        # process final record on loop exit
        record.append(size)
        self.notify(record)


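The offset/size bookkeeping in `OUIIndexParser.parse` can be exercised on an in-memory sample. A condensed sketch of the same loop using `io.BytesIO` (the parser expects binary mode so `tell()` returns byte offsets; the sample registry text is made up):

```python
import io

sample = (
    b"header line to be skipped\n"
    b"00-CA-FE   (hex)\t\tACME CORPORATION\n"
    b"00CAFE     (base 16)\t\tACME CORPORATION\n"
    b"\t\t1 MAIN STREET\n"
)


def index_oui_records(fh):
    """Return [index, offset, size] triples, mirroring the parse loop."""
    records, record, size, skip_header = [], None, 0, True
    while True:
        line = fh.readline()
        if not line:
            break
        if skip_header and b'(hex)' in line:
            skip_header = False
        if skip_header:
            continue  # still in the registry file's header section
        if b'(hex)' in line:
            # A new record starts; flush the previous one if any.
            if record is not None:
                record.append(size)
                records.append(record)
            size = len(line)
            offset = fh.tell() - len(line)   # byte offset of record start
            oui = line.split()[0]
            record = [int(oui.replace(b'-', b''), 16), offset]
        else:
            size += len(line)                # accumulate record size
    record.append(size)                      # final record on EOF
    records.append(record)
    return records


print(index_oui_records(io.BytesIO(sample)))
```

These `[index, offset, size]` triples are exactly what `FileIndexer` writes to the CSV index and what `OUI.__init__` later uses to `seek()` straight to a registration record.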
class IABIndexParser(Publisher):
    """
    A concrete Publisher that parses IAB (Individual Address Block) records
    from IEEE text-based registration files.

    It notifies registered Subscribers as each record is encountered, passing
    on the record's position relative to the start of the file (offset) and
    the size of the record (in bytes).

    The file processed by this parser is available online from this URL :-

        - http://standards.ieee.org/regauth/oui/iab.txt

    This is a sample of the record structure expected::

        00-50-C2        (hex)       ACME CORPORATION
        ABC000-ABCFFF   (base 16)   ACME CORPORATION
                                    1 MAIN STREET
                                    SPRINGFIELD
                                    UNITED STATES
    """
    def __init__(self, ieee_file):
        """
        Constructor.

        :param ieee_file: a file-like object or name of file containing IAB
            records. When using a file-like object always open it in binary
            mode, otherwise offsets will probably misbehave.
        """
        super(IABIndexParser, self).__init__()

        if hasattr(ieee_file, 'readline') and hasattr(ieee_file, 'tell'):
            self.fh = ieee_file
        else:
            self.fh = open(ieee_file, 'rb')

    def parse(self):
        """
        Starts the parsing process which detects records and notifies
        registered subscribers as it finds each IAB record.
        """
        skip_header = True
        record = None
        size = 0

        hex_marker = _bytes_type('(hex)')
        base16_marker = _bytes_type('(base 16)')
        hyphen = _bytes_type('-')
        empty_string = _bytes_type('')

        while True:
            line = self.fh.readline()

            if not line:
                break   # EOF, we're done.

            if skip_header and hex_marker in line:
                skip_header = False

            if skip_header:
                # Ignoring the header section.
                continue

            if hex_marker in line:
                # Record start.
                if record is not None:
                    record.append(size)
                    self.notify(record)

                offset = (self.fh.tell() - len(line))
                iab_prefix = line.split()[0]
                index = iab_prefix
                record = [index, offset]
                size = len(line)
            elif base16_marker in line:
                # Within record.
                size += len(line)
                prefix = record[0].replace(hyphen, empty_string)
                suffix = line.split()[0]
                suffix = suffix.split(hyphen)[0]
                record[0] = (int(prefix + suffix, 16)) >> 12
            else:
                # Within record.
                size += len(line)

        # Process the final record on loop exit.
        record.append(size)
        self.notify(record)

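The 36-bit index key built in the `(base 16)` branch above can be checked by hand. Here is a standalone sketch (not part of netaddr) using the sample record from the class docstring: the 24-bit OUI prefix from the `(hex)` line is concatenated with the hex suffix from the `(base 16)` line, then shifted right 12 bits so only the 36-bit IAB prefix forms the lookup key.

```python
# Worked example of the IAB index key computed in IABIndexParser.parse().
prefix = '00-50-C2'.replace('-', '')       # 24-bit OUI from the '(hex)' line
suffix = 'ABC000-ABCFFF'.split('-')[0]     # lower bound from the '(base 16)' line

# Concatenating gives the 48-bit value 0x0050C2ABC000; dropping the low
# 12 bits (3 hex digits, all zero for an IAB lower bound) leaves the key.
index = int(prefix + suffix, 16) >> 12
```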
def create_index_from_registry(registry_fh, index_path, parser):
    """Generate an index file from an IEEE registry file."""
    oui_parser = parser(registry_fh)
    oui_parser.attach(FileIndexer(index_path))
    oui_parser.parse()


def create_indices():
    """Create indices for OUI and IAB file based lookups."""
    create_index_from_registry(
        _path.join(_path.dirname(__file__), 'oui.txt'),
        _path.join(_path.dirname(__file__), 'oui.idx'),
        OUIIndexParser,
    )
    create_index_from_registry(
        _path.join(_path.dirname(__file__), 'iab.txt'),
        _path.join(_path.dirname(__file__), 'iab.idx'),
        IABIndexParser,
    )


def load_index(index, fp):
    """Load an index from file into an index data structure."""
    try:
        for row in _csv.reader([x.decode('UTF-8') for x in fp]):
            (key, offset, size) = [int(_) for _ in row]
            index.setdefault(key, [])
            index[key].append((offset, size))
    finally:
        fp.close()


def load_indices():
    """Load the OUI and IAB lookup indices into memory."""
    load_index(OUI_INDEX, _importlib_resources.open_binary(__package__, 'oui.idx'))
    load_index(IAB_INDEX, _importlib_resources.open_binary(__package__, 'iab.idx'))


if __name__ == '__main__':
    # Generate indices when the module is executed as a script.
    create_indices()
else:
    # On module load, read indices into memory to enable lookups.
    load_indices()
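The index files written out above and read back by `load_index()` are plain CSV: one `key,offset,size` row per record, with repeated keys accumulating `(offset, size)` pairs. A minimal self-contained sketch of that loading step (the sample keys and offsets are made up for illustration):

```python
import csv
import io

def load_idx(index, fp):
    # Mirrors load_index() above: each row is "key,offset,size" and a key
    # seen more than once accumulates (offset, size) pairs.
    for row in csv.reader(fp):
        key, offset, size = (int(col) for col in row)
        index.setdefault(key, []).append((offset, size))

sample = io.StringIO('8857,1500,145\n8857,1645,90\n40967,1735,120\n')
idx = {}
load_idx(idx, sample)
```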
28112
vendor/netaddr/eui/oui.idx
vendored
Normal file
File diff suppressed because it is too large
Load Diff
168378
vendor/netaddr/eui/oui.txt
vendored
Normal file
File diff suppressed because it is too large
Load Diff
246
vendor/netaddr/fbsocket.py
vendored
Normal file
@ -0,0 +1,246 @@
#-----------------------------------------------------------------------------
#   Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
#   Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
"""Fallback routines for Python's standard library socket module."""

from struct import unpack as _unpack, pack as _pack

from netaddr.compat import _bytes_join, _is_str

AF_INET = 2
AF_INET6 = 10


def inet_ntoa(packed_ip):
    """
    Convert an IP address from 32-bit packed binary format to string format.
    """
    if not _is_str(packed_ip):
        raise TypeError('string type expected, not %s' % type(packed_ip))

    if len(packed_ip) != 4:
        raise ValueError('invalid length of packed IP address string')

    return '%d.%d.%d.%d' % _unpack('4B', packed_ip)


def _compact_ipv6_tokens(tokens):
    new_tokens = []

    positions = []
    start_index = None
    num_tokens = 0

    # Discover all runs of zeros.
    for idx, token in enumerate(tokens):
        if token == '0':
            if start_index is None:
                start_index = idx
            num_tokens += 1
        else:
            if num_tokens > 1:
                positions.append((num_tokens, start_index))
            start_index = None
            num_tokens = 0

        new_tokens.append(token)

    # Store any position not saved before loop exit.
    if num_tokens > 1:
        positions.append((num_tokens, start_index))

    # Replace the first longest run with an empty string.
    if len(positions) != 0:
        # Locate the longest, left-most run of zeros.
        positions.sort(key=lambda x: x[1])
        best_position = positions[0]
        for position in positions:
            if position[0] > best_position[0]:
                best_position = position
        # Replace the chosen zero run.
        (length, start_idx) = best_position
        new_tokens = new_tokens[0:start_idx] + [''] + new_tokens[start_idx + length:]

        # Add start and end blanks so join creates '::'.
        if new_tokens[0] == '':
            new_tokens.insert(0, '')

        if new_tokens[-1] == '':
            new_tokens.append('')

    return new_tokens

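A self-contained sketch (not netaddr's API) of the zero-run compaction that `_compact_ipv6_tokens()` performs: the longest run of two or more `'0'` hextets (left-most on ties) is replaced by a single empty token, with extra blanks at the ends so that `':'.join()` produces `'::'`.

```python
def compact(tokens):
    # Find the longest, left-most run of '0' tokens; a trailing sentinel
    # flushes a run that reaches the end of the list.
    best_len = best_start = 0
    run_len = run_start = 0
    for i, tok in enumerate(tokens + ['sentinel']):
        if tok == '0':
            if run_len == 0:
                run_start = i
            run_len += 1
        else:
            if run_len > best_len:
                best_len, best_start = run_len, run_start
            run_len = 0
    # Only runs of two or more hextets are worth compacting.
    if best_len > 1:
        tokens = tokens[:best_start] + [''] + tokens[best_start + best_len:]
        if tokens[0] == '':
            tokens.insert(0, '')
        if tokens[-1] == '':
            tokens.append('')
    return tokens
```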
def inet_ntop(af, packed_ip):
    """Convert a packed IP address of the given family to string format."""
    if af == AF_INET:
        # IPv4.
        return inet_ntoa(packed_ip)
    elif af == AF_INET6:
        # IPv6.
        if len(packed_ip) != 16 or not _is_str(packed_ip):
            raise ValueError('invalid length of packed IP address string')

        tokens = ['%x' % i for i in _unpack('>8H', packed_ip)]

        # Convert the packed address to an integer value.
        words = list(_unpack('>8H', packed_ip))
        int_val = 0
        for i, num in enumerate(reversed(words)):
            word = num
            word = word << 16 * i
            int_val = int_val | word

        if 0xffff < int_val <= 0xffffffff or int_val >> 32 == 0xffff:
            # IPv4 compatible / mapped IPv6.
            packed_ipv4 = _pack('>2H', *[int(i, 16) for i in tokens[-2:]])
            ipv4_str = inet_ntoa(packed_ipv4)
            tokens = tokens[0:-2] + [ipv4_str]

        return ':'.join(_compact_ipv6_tokens(tokens))
    else:
        raise ValueError('unknown address family %d' % af)


def _inet_pton_af_inet(ip_string):
    """
    Convert an IP address in string format (123.45.67.89) to the 32-bit packed
    binary format used in low-level network functions. Differs from inet_aton
    by only supporting decimal octets. Using octal or hexadecimal values will
    raise a ValueError exception.
    """
    #TODO: optimise this ... use inet_aton with mods if available ...
    if _is_str(ip_string):
        invalid_addr = ValueError('illegal IP address string %r' % ip_string)
        # Reject hexadecimal and octal octets.
        tokens = ip_string.split('.')

        # Pack the octets.
        if len(tokens) == 4:
            words = []
            for token in tokens:
                if token.startswith('0x') or (token.startswith('0') and len(token) > 1):
                    raise invalid_addr
                try:
                    octet = int(token)
                except ValueError:
                    raise invalid_addr

                if (octet >> 8) != 0:
                    raise invalid_addr
                words.append(_pack('B', octet))
            return _bytes_join(words)
        else:
            raise invalid_addr

    raise ValueError('argument should be a string, not %s' % type(ip_string))


def inet_pton(af, ip_string):
    """
    Convert an IP address from string format to a packed string suitable for
    use with low-level network functions.
    """
    if af == AF_INET:
        # IPv4.
        return _inet_pton_af_inet(ip_string)
    elif af == AF_INET6:
        invalid_addr = ValueError('illegal IP address string %r' % ip_string)
        # IPv6.
        values = []

        if not _is_str(ip_string):
            raise invalid_addr

        if 'x' in ip_string:
            # Don't accept hextets with the 0x prefix.
            raise invalid_addr

        if '::' in ip_string:
            if ip_string == '::':
                # Unspecified address.
                return '\x00'.encode() * 16
            # IPv6 compact mode.
            try:
                prefix, suffix = ip_string.split('::')
            except ValueError:
                raise invalid_addr

            l_prefix = []
            l_suffix = []

            if prefix != '':
                l_prefix = prefix.split(':')

            if suffix != '':
                l_suffix = suffix.split(':')

            # IPv6 compact IPv4 compatibility mode.
            if len(l_suffix) and '.' in l_suffix[-1]:
                ipv4_str = _inet_pton_af_inet(l_suffix.pop())
                l_suffix.append('%x' % _unpack('>H', ipv4_str[0:2])[0])
                l_suffix.append('%x' % _unpack('>H', ipv4_str[2:4])[0])

            token_count = len(l_prefix) + len(l_suffix)

            if not 0 <= token_count <= 8 - 1:
                raise invalid_addr

            gap_size = 8 - (len(l_prefix) + len(l_suffix))

            values = (
                [_pack('>H', int(i, 16)) for i in l_prefix] +
                ['\x00\x00'.encode() for i in range(gap_size)] +
                [_pack('>H', int(i, 16)) for i in l_suffix]
            )
            try:
                for token in l_prefix + l_suffix:
                    word = int(token, 16)
                    if not 0 <= word <= 0xffff:
                        raise invalid_addr
            except ValueError:
                raise invalid_addr
        else:
            # IPv6 verbose mode.
            if ':' in ip_string:
                tokens = ip_string.split(':')

                if '.' in ip_string:
                    ipv6_prefix = tokens[:-1]
                    if ipv6_prefix[:-1] != ['0', '0', '0', '0', '0']:
                        raise invalid_addr

                    if ipv6_prefix[-1].lower() not in ('0', 'ffff'):
                        raise invalid_addr

                    # IPv6 verbose IPv4 compatibility mode.
                    if len(tokens) != 7:
                        raise invalid_addr

                    ipv4_str = _inet_pton_af_inet(tokens.pop())
                    tokens.append('%x' % _unpack('>H', ipv4_str[0:2])[0])
                    tokens.append('%x' % _unpack('>H', ipv4_str[2:4])[0])

                    values = [_pack('>H', int(i, 16)) for i in tokens]
                else:
                    # IPv6 verbose mode.
                    if len(tokens) != 8:
                        raise invalid_addr
                    try:
                        tokens = [int(token, 16) for token in tokens]
                        for token in tokens:
                            if not 0 <= token <= 0xffff:
                                raise invalid_addr
                    except ValueError:
                        raise invalid_addr

                    values = [_pack('>H', i) for i in tokens]
            else:
                raise invalid_addr

        return _bytes_join(values)
    else:
        raise ValueError('unknown address family %d' % af)
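The fallbacks above lean entirely on `struct`. A quick standard-library sanity sketch (independent of netaddr) of the two packing conventions used: four unsigned bytes for an IPv4 address, and eight big-endian unsigned shorts, one per hextet, for IPv6.

```python
from struct import pack, unpack

# IPv4: four unsigned bytes, rendered dotted-quad as inet_ntoa() does.
packed = pack('4B', 192, 0, 2, 1)
dotted = '%d.%d.%d.%d' % unpack('4B', packed)

# IPv6: eight big-endian unsigned shorts ('>8H'), one per hextet.
words = unpack('>8H', b'\x20\x01\x0d\xb8' + b'\x00' * 12)
hextets = ['%x' % w for w in words]
```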
1972
vendor/netaddr/ip/__init__.py
vendored
Normal file
File diff suppressed because it is too large
Load Diff
312
vendor/netaddr/ip/glob.py
vendored
Normal file
@ -0,0 +1,312 @@
#-----------------------------------------------------------------------------
#   Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
#   Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
"""
Routines and classes for supporting and expressing IP address ranges using a
glob style syntax.
"""
from netaddr.core import AddrFormatError, AddrConversionError
from netaddr.ip import IPRange, IPAddress, IPNetwork, iprange_to_cidrs
from netaddr.compat import _is_str


def valid_glob(ipglob):
    """
    :param ipglob: An IP address range in a glob-style format.

    :return: ``True`` if the IP range glob is valid, ``False`` otherwise.
    """
    #TODO: Add support for abbreviated ipglobs.
    #TODO: e.g. 192.0.*.* == 192.0.*
    #TODO:      *.*.*.* == *
    #TODO: Add strict flag to enable verbose ipglob checking.
    if not _is_str(ipglob):
        return False

    seen_hyphen = False
    seen_asterisk = False

    octets = ipglob.split('.')

    if len(octets) != 4:
        return False

    for octet in octets:
        if '-' in octet:
            if seen_hyphen:
                return False
            seen_hyphen = True
            if seen_asterisk:
                # Asterisks cannot precede hyphenated octets.
                return False
            try:
                (octet1, octet2) = [int(i) for i in octet.split('-')]
            except ValueError:
                return False
            if octet1 >= octet2:
                return False
            if not 0 <= octet1 <= 254:
                return False
            if not 1 <= octet2 <= 255:
                return False
        elif octet == '*':
            seen_asterisk = True
        else:
            if seen_hyphen is True:
                return False
            if seen_asterisk is True:
                return False
            try:
                if not 0 <= int(octet) <= 255:
                    return False
            except ValueError:
                return False
    return True


def glob_to_iptuple(ipglob):
    """
    A function that accepts a glob-style IP range and returns its component
    lower and upper bound IP addresses.

    :param ipglob: an IP address range in a glob-style format.

    :return: a tuple containing the lower and upper bound IP objects.
    """
    if not valid_glob(ipglob):
        raise AddrFormatError('not a recognised IP glob range: %r!' % (ipglob,))

    start_tokens = []
    end_tokens = []

    for octet in ipglob.split('.'):
        if '-' in octet:
            tokens = octet.split('-')
            start_tokens.append(tokens[0])
            end_tokens.append(tokens[1])
        elif octet == '*':
            start_tokens.append('0')
            end_tokens.append('255')
        else:
            start_tokens.append(octet)
            end_tokens.append(octet)

    return IPAddress('.'.join(start_tokens)), IPAddress('.'.join(end_tokens))


def glob_to_iprange(ipglob):
    """
    A function that accepts a glob-style IP range and returns the equivalent
    IP range.

    :param ipglob: an IP address range in a glob-style format.

    :return: an IPRange object.
    """
    if not valid_glob(ipglob):
        raise AddrFormatError('not a recognised IP glob range: %r!' % (ipglob,))

    start_tokens = []
    end_tokens = []

    for octet in ipglob.split('.'):
        if '-' in octet:
            tokens = octet.split('-')
            start_tokens.append(tokens[0])
            end_tokens.append(tokens[1])
        elif octet == '*':
            start_tokens.append('0')
            end_tokens.append('255')
        else:
            start_tokens.append(octet)
            end_tokens.append(octet)

    return IPRange('.'.join(start_tokens), '.'.join(end_tokens))


def iprange_to_globs(start, end):
    """
    A function that accepts an arbitrary start and end IP address or subnet
    and returns one or more glob-style IP ranges.

    :param start: the start IP address or subnet.

    :param end: the end IP address or subnet.

    :return: a list containing one or more IP globs.
    """
    start = IPAddress(start)
    end = IPAddress(end)

    if start.version != 4 and end.version != 4:
        raise AddrConversionError('IP glob ranges only support IPv4!')

    def _iprange_to_glob(lb, ub):
        # Internal function to process individual IP globs.
        t1 = [int(_) for _ in str(lb).split('.')]
        t2 = [int(_) for _ in str(ub).split('.')]

        tokens = []

        seen_hyphen = False
        seen_asterisk = False

        for i in range(4):
            if t1[i] == t2[i]:
                # A normal octet.
                tokens.append(str(t1[i]))
            elif (t1[i] == 0) and (t2[i] == 255):
                # An asterisk octet.
                tokens.append('*')
                seen_asterisk = True
            else:
                # Create a hyphenated octet - only one allowed per IP glob.
                if not seen_asterisk:
                    if not seen_hyphen:
                        tokens.append('%s-%s' % (t1[i], t2[i]))
                        seen_hyphen = True
                    else:
                        raise AddrConversionError(
                            'only 1 hyphenated octet per IP glob allowed!')
                else:
                    raise AddrConversionError(
                        'asterisks are not allowed before hyphenated octets!')

        return '.'.join(tokens)

    globs = []

    try:
        # IP range can be represented by a single glob.
        ipglob = _iprange_to_glob(start, end)
        if not valid_glob(ipglob):
            #TODO: this is a workaround, it produces non-optimal but valid
            #TODO: glob conversions. Fix the inner function so that it always
            #TODO: produces a valid glob.
            raise AddrConversionError('invalid ip glob created')
        globs.append(ipglob)
    except AddrConversionError:
        # Break the IP range up into CIDRs before conversion to globs.
        #
        #TODO: this is still not completely optimised but is good enough
        #TODO: for the moment.
        #
        for cidr in iprange_to_cidrs(start, end):
            ipglob = _iprange_to_glob(cidr[0], cidr[-1])
            globs.append(ipglob)

    return globs


def glob_to_cidrs(ipglob):
    """
    A function that accepts a glob-style IP range and returns a list of one
    or more IP CIDRs that exactly matches it.

    :param ipglob: an IP address range in a glob-style format.

    :return: a list of one or more IP objects.
    """
    return iprange_to_cidrs(*glob_to_iptuple(ipglob))


def cidr_to_glob(cidr):
    """
    A function that accepts an IP subnet in CIDR format and returns the
    glob-style IP range that exactly matches it.

    :param cidr: an IP object CIDR subnet.

    :return: a single glob-style IP range.
    """
    ip = IPNetwork(cidr)
    globs = iprange_to_globs(ip[0], ip[-1])
    if len(globs) != 1:
        # There should only ever be a one to one mapping between a CIDR and
        # an IP glob range.
        raise AddrConversionError('bad CIDR to IP glob conversion!')
    return globs[0]


class IPGlob(IPRange):
    """
    Represents an IP address range using a glob-style syntax ``x.x.x-y.*``.

    Individual octets can be represented using the following shortcuts :

        1. ``*`` - the asterisk octet (represents values ``0`` through ``255``)
        2. ``x-y`` - the hyphenated octet (represents values ``x`` through ``y``)

    A few basic rules also apply :

        1. ``x`` must always be less than ``y``, therefore :

            - ``x`` can only be ``0`` through ``254``
            - ``y`` can only be ``1`` through ``255``

        2. only one hyphenated octet per IP glob is allowed
        3. only asterisks are permitted after a hyphenated octet

    Examples:

    +------------------+------------------------------+
    | IP glob          | Description                  |
    +==================+==============================+
    | ``192.0.2.1``    | a single address             |
    +------------------+------------------------------+
    | ``192.0.2.0-31`` | 32 addresses                 |
    +------------------+------------------------------+
    | ``192.0.2.*``    | 256 addresses                |
    +------------------+------------------------------+
    | ``192.0.2-3.*``  | 512 addresses                |
    +------------------+------------------------------+
    | ``192.0-1.*.*``  | 131,072 addresses            |
    +------------------+------------------------------+
    | ``*.*.*.*``      | the whole IPv4 address space |
    +------------------+------------------------------+

    .. note :: \
        IP glob ranges are not directly equivalent to CIDR blocks. \
        They can represent address ranges that do not fall on strict bit mask \
        boundaries. They are suitable for use in configuration files, being \
        more obvious and readable than their CIDR counterparts, especially for \
        admins and end users with little or no networking knowledge or \
        experience. All CIDR addresses can always be represented as IP globs \
        but the reverse is not always true.
    """
    __slots__ = ('_glob',)

    def __init__(self, ipglob):
        (start, end) = glob_to_iptuple(ipglob)
        super(IPGlob, self).__init__(start, end)
        self.glob = iprange_to_globs(self._start, self._end)[0]

    def __getstate__(self):
        """:return: Pickled state of an `IPGlob` object."""
        return super(IPGlob, self).__getstate__()

    def __setstate__(self, state):
        """:param state: data used to unpickle a pickled `IPGlob` object."""
        super(IPGlob, self).__setstate__(state)
        self.glob = iprange_to_globs(self._start, self._end)[0]

    def _get_glob(self):
        return self._glob

    def _set_glob(self, ipglob):
        (self._start, self._end) = glob_to_iptuple(ipglob)
        self._glob = iprange_to_globs(self._start, self._end)[0]

    glob = property(_get_glob, _set_glob, None,
        'an arbitrary IP address range in glob format.')

    def __str__(self):
        """:return: IP glob in common representational format."""
        return "%s" % self.glob

    def __repr__(self):
        """:return: Python statement to create an equivalent object."""
        return "%s('%s')" % (self.__class__.__name__, self.glob)
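Both `glob_to_iptuple()` and `glob_to_iprange()` expand each octet the same way: a literal octet maps to itself, `*` maps to `0`/`255`, and `x-y` maps to `x`/`y`. A standalone sketch of just that expansion rule (the helper name is hypothetical, not netaddr's API):

```python
def expand_octet(octet):
    # One glob octet -> (lower bound, upper bound) as decimal strings.
    if octet == '*':
        return '0', '255'
    if '-' in octet:
        lo, hi = octet.split('-')
        return lo, hi
    return octet, octet

# '192.0.2-3.*' expands octet-wise into the two bounding addresses.
start, end = zip(*(expand_octet(o) for o in '192.0.2-3.*'.split('.')))
```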
448
vendor/netaddr/ip/iana.py
vendored
Normal file
@ -0,0 +1,448 @@
|
||||
#!/usr/bin/env python
|
||||
#-----------------------------------------------------------------------------
|
||||
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
|
||||
#
|
||||
# Released under the BSD license. See the LICENSE file for details.
|
||||
#-----------------------------------------------------------------------------
|
||||
#
|
||||
# DISCLAIMER
|
||||
#
|
||||
# netaddr is not sponsored nor endorsed by IANA.
|
||||
#
|
||||
# Use of data from IANA (Internet Assigned Numbers Authority) is subject to
|
||||
# copyright and is provided with prior written permission.
|
||||
#
|
||||
# IANA data files included with netaddr are not modified in any way but are
|
||||
# parsed and made available to end users through an API.
|
||||
#
|
||||
# See README file and source code for URLs to latest copies of the relevant
|
||||
# files.
|
||||
#
|
||||
#-----------------------------------------------------------------------------
|
||||
"""
|
||||
Routines for accessing data published by IANA (Internet Assigned Numbers
|
||||
Authority).
|
||||
|
||||
More details can be found at the following URLs :-
|
||||
|
||||
- IANA Home Page - http://www.iana.org/
|
||||
- IEEE Protocols Information Home Page - http://www.iana.org/protocols/
|
||||
"""
|
||||
|
||||
import sys as _sys
|
||||
from xml.sax import make_parser, handler
|
||||
|
||||
from netaddr.core import Publisher, Subscriber
|
||||
from netaddr.ip import IPAddress, IPNetwork, IPRange, cidr_abbrev_to_verbose
|
||||
from netaddr.compat import _dict_items, _callable, _importlib_resources
|
||||
|
||||
|
||||
|
||||
#: Topic based lookup dictionary for IANA information.
|
||||
IANA_INFO = {
|
||||
'IPv4': {},
|
||||
'IPv6': {},
|
||||
'IPv6_unicast': {},
|
||||
'multicast': {},
|
||||
}
|
||||
|
||||
|
||||
class SaxRecordParser(handler.ContentHandler):
|
||||
def __init__(self, callback=None):
|
||||
self._level = 0
|
||||
self._is_active = False
|
||||
self._record = None
|
||||
self._tag_level = None
|
||||
self._tag_payload = None
|
||||
self._tag_feeding = None
|
||||
self._callback = callback
|
||||
|
||||
def startElement(self, name, attrs):
|
||||
self._level += 1
|
||||
|
||||
if self._is_active is False:
|
||||
if name == 'record':
|
||||
self._is_active = True
|
||||
self._tag_level = self._level
|
||||
self._record = {}
|
||||
if 'date' in attrs:
|
||||
self._record['date'] = attrs['date']
|
||||
elif self._level == self._tag_level + 1:
|
||||
if name == 'xref':
|
||||
if 'type' in attrs and 'data' in attrs:
|
||||
l = self._record.setdefault(attrs['type'], [])
|
||||
l.append(attrs['data'])
|
||||
else:
|
||||
self._tag_payload = []
|
||||
self._tag_feeding = True
|
||||
else:
|
||||
self._tag_feeding = False
|
||||
|
||||
def endElement(self, name):
|
||||
if self._is_active is True:
|
||||
if name == 'record' and self._tag_level == self._level:
|
||||
self._is_active = False
|
||||
self._tag_level = None
|
||||
if _callable(self._callback):
|
||||
self._callback(self._record)
|
||||
self._record = None
|
||||
elif self._level == self._tag_level + 1:
|
||||
if name != 'xref':
|
||||
self._record[name] = ''.join(self._tag_payload)
|
||||
self._tag_payload = None
|
||||
self._tag_feeding = False
|
||||
|
||||
self._level -= 1
|
||||
|
||||
def characters(self, content):
|
||||
if self._tag_feeding is True:
|
||||
self._tag_payload.append(content)
|
||||
|
||||
|
||||
class XMLRecordParser(Publisher):
|
||||
"""
|
||||
A configurable Parser that understands how to parse XML based records.
|
||||
"""
|
||||
|
||||
def __init__(self, fh, **kwargs):
|
||||
"""
|
||||
Constructor.
|
||||
|
||||
fh - a valid, open file handle to XML based record data.
|
||||
"""
|
||||
super(XMLRecordParser, self).__init__()
|
||||
|
||||
self.xmlparser = make_parser()
|
||||
self.xmlparser.setContentHandler(SaxRecordParser(self.consume_record))
|
||||
|
||||
self.fh = fh
|
||||
|
||||
self.__dict__.update(kwargs)
|
||||
|
||||
def process_record(self, rec):
|
||||
"""
|
||||
This is the callback method invoked for every record. It is usually
|
||||
over-ridden by base classes to provide specific record-based logic.
|
||||
|
||||
Any record can be vetoed (not passed to registered Subscriber objects)
|
||||
by simply returning None.
|
||||
"""
|
||||
return rec
|
||||
|
||||
def consume_record(self, rec):
|
||||
record = self.process_record(rec)
|
||||
if record is not None:
|
||||
self.notify(record)
|
||||
|
||||
def parse(self):
|
||||
"""
|
||||
Parse and normalises records, notifying registered subscribers with
|
||||
record data as it is encountered.
|
||||
"""
|
||||
self.xmlparser.parse(self.fh)
|
||||
|
||||
|
||||
class IPv4Parser(XMLRecordParser):
|
||||
"""
|
||||
A XMLRecordParser that understands how to parse and retrieve data records
|
||||
from the IANA IPv4 address space file.
|
||||
|
||||
It can be found online here :-
|
||||
|
||||
- http://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.xml
|
||||
"""
|
||||
|
||||
def __init__(self, fh, **kwargs):
|
||||
"""
|
||||
Constructor.
|
||||
|
||||
fh - a valid, open file handle to an IANA IPv4 address space file.
|
||||
|
||||
kwargs - additional parser options.
|
||||
"""
|
||||
super(IPv4Parser, self).__init__(fh)
|
||||
|
||||
def process_record(self, rec):
|
||||
"""
|
||||
Callback method invoked for every record.
|
||||
|
||||
See base class method for more details.
|
||||
"""
|
||||
|
||||
record = {}
|
||||
for key in ('prefix', 'designation', 'date', 'whois', 'status'):
|
||||
record[key] = str(rec.get(key, '')).strip()
|
||||
|
||||
# Strip leading zeros from octet.
|
||||
if '/' in record['prefix']:
|
||||
(octet, prefix) = record['prefix'].split('/')
|
||||
record['prefix'] = '%d/%d' % (int(octet), int(prefix))
|
||||
|
||||
record['status'] = record['status'].capitalize()
|
||||
|
||||
return record
|
||||
|
||||
|
||||
class IPv6Parser(XMLRecordParser):
|
||||
"""
|
||||
A XMLRecordParser that understands how to parse and retrieve data records
|
||||
from the IANA IPv6 address space file.
|
||||
|
||||
It can be found online here :-
|
||||
|
||||
- http://www.iana.org/assignments/ipv6-address-space/ipv6-address-space.xml
|
||||
"""
|
||||
|
||||
def __init__(self, fh, **kwargs):
|
||||
"""
|
||||
Constructor.
|
||||
|
||||
fh - a valid, open file handle to an IANA IPv6 address space file.
|
||||
|
||||
kwargs - additional parser options.
|
||||
"""
|
||||
super(IPv6Parser, self).__init__(fh)
|
||||
|
||||
def process_record(self, rec):
|
||||
"""
|
||||
Callback method invoked for every record.
|
||||
|
||||
See base class method for more details.
|
||||
"""
|
||||
|
||||
record = {
|
||||
'prefix': str(rec.get('prefix', '')).strip(),
|
||||
'allocation': str(rec.get('description', '')).strip(),
|
||||
# HACK: -1 instead of 0 is a hacky hack to get 4291 instead of 3513 from
|
||||
#
|
||||
# <xref type="rfc" data="rfc3513"/> was later obsoleted by <xref type="rfc" data="rfc4291"/>
|
||||
#
|
||||
# I imagine there's no way to solve this in a general way, maybe we should start returning a list
|
||||
# of RFC-s here?
|
||||
'reference': str(rec.get('rfc', [''])[-1]).strip(),
|
||||
}
|
||||
|
||||
return record
|
||||
|
||||
|
||||
class IPv6UnicastParser(XMLRecordParser):
|
||||
"""
|
||||
A XMLRecordParser that understands how to parse and retrieve data records
|
||||
from the IANA IPv6 unicast address assignments file.
|
||||
|
||||
It can be found online here :-
|
||||
|
||||
- http://www.iana.org/assignments/ipv6-unicast-address-assignments/ipv6-unicast-address-assignments.xml
|
||||
"""
|
||||
def __init__(self, fh, **kwargs):
|
||||
"""
|
||||
Constructor.
|
||||
|
||||
fh - a valid, open file handle to an IANA IPv6 address space file.
|
||||
|
||||
kwargs - additional parser options.
|
||||
"""
|
||||
super(IPv6UnicastParser, self).__init__(fh)
|
||||
|
||||
def process_record(self, rec):
|
||||
"""
|
||||
Callback method invoked for every record.
|
||||
|
||||
See base class method for more details.
|
||||
"""
|
||||
record = {
|
||||
'status': str(rec.get('status', '')).strip(),
|
||||
'description': str(rec.get('description', '')).strip(),
|
||||
'prefix': str(rec.get('prefix', '')).strip(),
|
||||
'date': str(rec.get('date', '')).strip(),
|
||||
'whois': str(rec.get('whois', '')).strip(),
|
||||
}
|
||||
|
||||
return record
|
||||
|
||||
|
||||
class MulticastParser(XMLRecordParser):
    """
    An XMLRecordParser that knows how to process the IANA IPv4 multicast
    address allocation file.

    It can be found online here :-

    - http://www.iana.org/assignments/multicast-addresses/multicast-addresses.xml
    """

    def __init__(self, fh, **kwargs):
        """
        Constructor.

        fh - a valid, open file handle to an IANA IPv4 multicast address
        allocation file.

        kwargs - additional parser options.
        """
        super(MulticastParser, self).__init__(fh)

    def normalise_addr(self, addr):
        """
        Removes variations from address entries found in this particular file.
        """
        if '-' in addr:
            (a1, a2) = addr.split('-')
            o1 = a1.strip().split('.')
            o2 = a2.strip().split('.')
            return '%s-%s' % ('.'.join([str(int(i)) for i in o1]),
                              '.'.join([str(int(i)) for i in o2]))
        else:
            o1 = addr.strip().split('.')
            return '.'.join([str(int(i)) for i in o1])
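The normalisation above can be sketched as a standalone free function (a hypothetical, netaddr-free copy of `normalise_addr`) to show what it does to the zero-padded octets and spaced ranges found in the multicast file:

```python
def normalise_addr(addr):
    """Strip leading zeroes from each octet of an IPv4 address or a
    'first-last' range, mirroring MulticastParser.normalise_addr."""
    if '-' in addr:
        a1, a2 = addr.split('-')
        o1 = a1.strip().split('.')
        o2 = a2.strip().split('.')
        return '%s-%s' % ('.'.join(str(int(i)) for i in o1),
                          '.'.join(str(int(i)) for i in o2))
    o1 = addr.strip().split('.')
    return '.'.join(str(int(i)) for i in o1)

print(normalise_addr('224.000.000.001'))          # 224.0.0.1
print(normalise_addr('224.0.0.0 - 224.0.0.255'))  # 224.0.0.0-224.0.0.255
```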

    def process_record(self, rec):
        """
        Callback method invoked for every record.

        See base class method for more details.
        """
        if 'addr' in rec:
            record = {
                'address': self.normalise_addr(str(rec['addr'])),
                'descr': str(rec.get('description', '')),
            }
            return record

class DictUpdater(Subscriber):
    """
    Concrete Subscriber that inserts records received from a Publisher into a
    dictionary.
    """

    def __init__(self, dct, topic, unique_key):
        """
        Constructor.

        dct - lookup dict or dict like object to insert records into.

        topic - high-level category name of data to be processed.

        unique_key - key name in data dict that uniquely identifies it.
        """
        self.dct = dct
        self.topic = topic
        self.unique_key = unique_key

    def update(self, data):
        """
        Callback function used by Publisher to notify this Subscriber about
        an update. Stores topic based information into dictionary passed to
        constructor.
        """
        data_id = data[self.unique_key]

        if self.topic == 'IPv4':
            cidr = IPNetwork(cidr_abbrev_to_verbose(data_id))
            self.dct[cidr] = data
        elif self.topic == 'IPv6':
            cidr = IPNetwork(cidr_abbrev_to_verbose(data_id))
            self.dct[cidr] = data
        elif self.topic == 'IPv6_unicast':
            cidr = IPNetwork(data_id)
            self.dct[cidr] = data
        elif self.topic == 'multicast':
            iprange = None
            if '-' in data_id:
                # See if we can manage a single CIDR.
                (first, last) = data_id.split('-')
                iprange = IPRange(first, last)
                cidrs = iprange.cidrs()
                if len(cidrs) == 1:
                    iprange = cidrs[0]
            else:
                iprange = IPAddress(data_id)
            self.dct[iprange] = data
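The multicast branch above collapses a `first-last` range to a single CIDR when the range happens to align on one. The same key-selection logic can be sketched with the stdlib `ipaddress` module standing in for netaddr's `IPRange`/`IPAddress` (an assumption for illustration; netaddr would keep an `IPRange` object where this sketch falls back to a plain tuple):

```python
import ipaddress

def multicast_key(data_id):
    """Pick a lookup key for a multicast registry entry, mirroring
    DictUpdater's multicast branch with stdlib stand-ins."""
    if '-' in data_id:
        first, last = data_id.split('-')
        # summarize_address_range yields the minimal list of CIDRs
        # covering the range.
        cidrs = list(ipaddress.summarize_address_range(
            ipaddress.ip_address(first), ipaddress.ip_address(last)))
        if len(cidrs) == 1:
            return cidrs[0]  # the range collapses to a single CIDR
        return (ipaddress.ip_address(first), ipaddress.ip_address(last))
    return ipaddress.ip_address(data_id)

print(multicast_key('224.0.0.0-224.0.0.255'))  # 224.0.0.0/24
```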


def load_info():
    """
    Parse and load internal IANA data lookups with the latest information from
    data files.
    """
    ipv4 = IPv4Parser(_importlib_resources.open_binary(__package__, 'ipv4-address-space.xml'))
    ipv4.attach(DictUpdater(IANA_INFO['IPv4'], 'IPv4', 'prefix'))
    ipv4.parse()

    ipv6 = IPv6Parser(_importlib_resources.open_binary(__package__, 'ipv6-address-space.xml'))
    ipv6.attach(DictUpdater(IANA_INFO['IPv6'], 'IPv6', 'prefix'))
    ipv6.parse()

    ipv6ua = IPv6UnicastParser(
        _importlib_resources.open_binary(__package__, 'ipv6-unicast-address-assignments.xml'),
    )
    ipv6ua.attach(DictUpdater(IANA_INFO['IPv6_unicast'], 'IPv6_unicast', 'prefix'))
    ipv6ua.parse()

    mcast = MulticastParser(_importlib_resources.open_binary(__package__, 'multicast-addresses.xml'))
    mcast.attach(DictUpdater(IANA_INFO['multicast'], 'multicast', 'address'))
    mcast.parse()

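`load_info` wires each parser to a `DictUpdater` through an observer pattern: the parser publishes each record, and every attached subscriber stores it. A minimal self-contained sketch of that wiring (hypothetical stand-in classes, not netaddr's actual `Publisher`/`Subscriber` implementations):

```python
class Publisher:
    """Minimal publisher: parsers play this role and push records out."""
    def __init__(self):
        self.subscribers = []

    def attach(self, subscriber):
        self.subscribers.append(subscriber)

    def notify(self, data):
        for subscriber in self.subscribers:
            subscriber.update(data)

class DictUpdaterSketch:
    """Stores each published record under its unique key, like DictUpdater."""
    def __init__(self, dct, unique_key):
        self.dct = dct
        self.unique_key = unique_key

    def update(self, data):
        self.dct[data[self.unique_key]] = data

lookup = {}
pub = Publisher()
pub.attach(DictUpdaterSketch(lookup, 'prefix'))
pub.notify({'prefix': '2001:0200::/23', 'description': 'APNIC'})
print(lookup['2001:0200::/23']['description'])  # APNIC
```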
def pprint_info(fh=None):
    """
    Pretty prints IANA information to filehandle.
    """
    if fh is None:
        fh = _sys.stdout

    for category in sorted(IANA_INFO):
        fh.write('-' * len(category) + "\n")
        fh.write(category + "\n")
        fh.write('-' * len(category) + "\n")
        ipranges = IANA_INFO[category]
        for iprange in sorted(ipranges):
            details = ipranges[iprange]
            # details is a dict, so it must be stringified before
            # concatenation.
            fh.write('%-45r' % (iprange,) + str(details) + "\n")

def _within_bounds(ip, ip_range):
    # Boundary checking for multiple IP classes.
    if hasattr(ip_range, 'first'):
        # IP network or IP range.
        return ip in ip_range
    elif hasattr(ip_range, 'value'):
        # IP address.
        return ip == ip_range

    raise Exception('Unsupported IP range or address: %r!' % (ip_range,))
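`_within_bounds` dispatches by duck typing: network/range objects expose a `first` attribute and take a membership test, while single addresses expose `value` and take an equality test. Hypothetical stub classes make that contract concrete:

```python
class Net:
    """Stand-in for an IP network/range: has 'first', supports 'in'."""
    def __init__(self, first, last):
        self.first, self.last = first, last

    def __contains__(self, ip):
        return self.first <= ip.value <= self.last

class Addr:
    """Stand-in for a single IP address: has 'value', supports '=='."""
    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        return self.value == getattr(other, 'value', None)

def within_bounds(ip, ip_range):
    # Same dispatch as _within_bounds above.
    if hasattr(ip_range, 'first'):
        return ip in ip_range       # network or range: membership test
    elif hasattr(ip_range, 'value'):
        return ip == ip_range       # single address: equality test
    raise Exception('Unsupported IP range or address: %r!' % (ip_range,))
```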


def query(ip_addr):
    """Returns informational data specific to this IP address."""
    info = {}

    if ip_addr.version == 4:
        for cidr, record in _dict_items(IANA_INFO['IPv4']):
            if _within_bounds(ip_addr, cidr):
                info.setdefault('IPv4', [])
                info['IPv4'].append(record)

        if ip_addr.is_multicast():
            for iprange, record in _dict_items(IANA_INFO['multicast']):
                if _within_bounds(ip_addr, iprange):
                    info.setdefault('Multicast', [])
                    info['Multicast'].append(record)

    elif ip_addr.version == 6:
        for cidr, record in _dict_items(IANA_INFO['IPv6']):
            if _within_bounds(ip_addr, cidr):
                info.setdefault('IPv6', [])
                info['IPv6'].append(record)

        for cidr, record in _dict_items(IANA_INFO['IPv6_unicast']):
            if _within_bounds(ip_addr, cidr):
                info.setdefault('IPv6_unicast', [])
                info['IPv6_unicast'].append(record)

    return info


# On module import, read IANA data files and populate lookups dict.
load_info()
2644
vendor/netaddr/ip/ipv4-address-space.xml
vendored
Normal file
File diff suppressed because it is too large
198
vendor/netaddr/ip/ipv6-address-space.xml
vendored
Normal file
@ -0,0 +1,198 @@
<?xml version='1.0' encoding='UTF-8'?>
<?xml-stylesheet type="text/xsl" href="ipv6-address-space.xsl"?>
<?oxygen RNGSchema="ipv6-address-space.rng" type="xml"?>
<registry xmlns="http://www.iana.org/assignments" id="ipv6-address-space">
  <title>Internet Protocol Version 6 Address Space</title>
  <updated>2019-09-13</updated>
  <note>The IPv6 address management function was formally delegated to
    IANA in December 1995 <xref type="rfc" data="rfc1881"/>. The registration procedure
    was confirmed with the IETF Chair in March 2010.

    As stated in RFC3513, IANA should limit its allocation of IPv6-unicast
    address space to the range of addresses that start with binary value 001.
    The rest of the global unicast address space (approximately 85% of the IPv6
    address space) is reserved for future definition and use, and is not to be
    assigned by IANA at this time.

    While <xref type="rfc" data="rfc3513"/> was obsoleted by <xref type="rfc" data="rfc4291"/>,
    the guidance provided to IANA did not change regarding the allocation of IPv6
    unicast addresses.
  </note>
  <registry id="ipv6-address-space-1">
    <registration_rule>IESG Approval</registration_rule>
    <record>
      <prefix>0000::/8</prefix>
      <description>Reserved by IETF</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes>
        <xref type="note" data="1"/>
        <xref type="note" data="2"/>
        <xref type="note" data="3"/>
        <xref type="note" data="4"/>
        <xref type="note" data="5"/>
        <xref type="note" data="6"/>
      </notes>
    </record>
    <record>
      <prefix>0100::/8</prefix>
      <description>Reserved by IETF</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes>0100::/64 reserved for Discard-Only Address Block <xref type="rfc" data="rfc6666"/>.
        Complete registration details are found in <xref type="registry" data="iana-ipv6-special-registry"/>.</notes>
    </record>
    <record>
      <prefix>0200::/7</prefix>
      <description>Reserved by IETF</description>
      <xref type="rfc" data="rfc4048"/>
      <notes>Deprecated as of December 2004 <xref type="rfc" data="rfc4048"/>.
        Formerly an OSI NSAP-mapped prefix set <xref type="rfc" data="rfc4548"/>.</notes>
    </record>
    <record>
      <prefix>0400::/6</prefix>
      <description>Reserved by IETF</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes/>
    </record>
    <record>
      <prefix>0800::/5</prefix>
      <description>Reserved by IETF</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes/>
    </record>
    <record>
      <prefix>1000::/4</prefix>
      <description>Reserved by IETF</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes/>
    </record>
    <record>
      <prefix>2000::/3</prefix>
      <description>Global Unicast</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes>The IPv6 Unicast space encompasses the entire IPv6 address range
        with the exception of ff00::/8, per <xref type="rfc" data="rfc4291"/>. IANA unicast address
        assignments are currently limited to the IPv6 unicast address
        range of 2000::/3. IANA assignments from this block are registered
        in <xref type="registry" data="ipv6-unicast-address-assignments"/>.
        <xref type="note" data="7"/>
        <xref type="note" data="8"/>
        <xref type="note" data="9"/>
        <xref type="note" data="10"/>
        <xref type="note" data="11"/>
        <xref type="note" data="12"/>
        <xref type="note" data="13"/>
        <xref type="note" data="14"/>
        <xref type="note" data="15"/>
      </notes>
    </record>
    <record>
      <prefix>4000::/3</prefix>
      <description>Reserved by IETF</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes/>
    </record>
    <record>
      <prefix>6000::/3</prefix>
      <description>Reserved by IETF</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes/>
    </record>
    <record>
      <prefix>8000::/3</prefix>
      <description>Reserved by IETF</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes/>
    </record>
    <record>
      <prefix>a000::/3</prefix>
      <description>Reserved by IETF</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes/>
    </record>
    <record>
      <prefix>c000::/3</prefix>
      <description>Reserved by IETF</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes/>
    </record>
    <record>
      <prefix>e000::/4</prefix>
      <description>Reserved by IETF</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes/>
    </record>
    <record>
      <prefix>f000::/5</prefix>
      <description>Reserved by IETF</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes/>
    </record>
    <record>
      <prefix>f800::/6</prefix>
      <description>Reserved by IETF</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes/>
    </record>
    <record>
      <prefix>fc00::/7</prefix>
      <description>Unique Local Unicast</description>
      <xref type="rfc" data="rfc4193"/>
      <notes>For complete registration details, see <xref type="registry" data="iana-ipv6-special-registry"/>.</notes>
    </record>
    <record>
      <prefix>fe00::/9</prefix>
      <description>Reserved by IETF</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes/>
    </record>
    <record>
      <prefix>fe80::/10</prefix>
      <description>Link-Scoped Unicast</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes>Reserved by protocol. For authoritative registration, see <xref type="registry" data="iana-ipv6-special-registry"/>.</notes>
    </record>
    <record>
      <prefix>fec0::/10</prefix>
      <description>Reserved by IETF</description>
      <xref type="rfc" data="rfc3879"/>
      <notes>Deprecated by <xref type="rfc" data="rfc3879"/> in September 2004. Formerly a Site-Local scoped address prefix.</notes>
    </record>
    <record>
      <prefix>ff00::/8</prefix>
      <description>Multicast</description>
      <xref type="rfc" data="rfc3513"/><xref type="rfc" data="rfc4291"/>
      <notes>IANA assignments from this block are registered in <xref type="registry" data="ipv6-multicast-addresses"/>.</notes>
    </record>
    <footnote anchor="1">::1/128 reserved for Loopback Address <xref type="rfc" data="rfc4291"/>.
      Reserved by protocol. For authoritative registration, see <xref type="registry" data="iana-ipv6-special-registry"/>.</footnote>
    <footnote anchor="2">::/128 reserved for Unspecified Address <xref type="rfc" data="rfc4291"/>.
      Reserved by protocol. For authoritative registration, see <xref type="registry" data="iana-ipv6-special-registry"/>.</footnote>
    <footnote anchor="3">::ffff:0:0/96 reserved for IPv4-mapped Address <xref type="rfc" data="rfc4291"/>.
      Reserved by protocol. For authoritative registration, see <xref type="registry" data="iana-ipv6-special-registry"/>.</footnote>
    <footnote anchor="4">0::/96 deprecated by <xref type="rfc" data="rfc4291"/>. Formerly defined as the "IPv4-compatible IPv6 address" prefix.</footnote>
    <footnote anchor="5">The "Well Known Prefix" 64:ff9b::/96 is used in an algorithmic mapping between IPv4 to IPv6 addresses <xref type="rfc" data="rfc6052"/>.</footnote>
    <footnote anchor="6">64:ff9b:1::/48 reserved for Local-Use IPv4/IPv6 Translation <xref type="rfc" data="rfc8215"/>.</footnote>
    <footnote anchor="7">2001:0::/23 reserved for IETF Protocol Assignments <xref type="rfc" data="rfc2928"/>.
      For complete registration details, see <xref type="registry" data="iana-ipv6-special-registry"/>.</footnote>
    <footnote anchor="8">2001:0::/32 reserved for TEREDO <xref type="rfc" data="rfc4380"/>.
      For complete registration details, see <xref type="registry" data="iana-ipv6-special-registry"/>.</footnote>
    <footnote anchor="9">2001:2::/48 reserved for Benchmarking <xref type="rfc" data="rfc5180"/><xref type="rfc-errata" data="1752"/>.
      For complete registration details, see <xref type="registry" data="iana-ipv6-special-registry"/>.</footnote>
    <footnote anchor="10">2001:3::/32 reserved for AMT <xref type="rfc" data="rfc7450"/>.
      For complete registration details, see <xref type="registry" data="iana-ipv6-special-registry"/>.</footnote>
    <footnote anchor="11">2001:4:112::/48 reserved for AS112-v6 <xref type="rfc" data="rfc7535"/>.
      For complete registration details, see <xref type="registry" data="iana-ipv6-special-registry"/>.</footnote>
    <footnote anchor="12">2001:10::/28 deprecated (formerly ORCHID) <xref type="rfc" data="rfc4843"/>.
      For complete registration details, see <xref type="registry" data="iana-ipv6-special-registry"/>.</footnote>
    <footnote anchor="13">2001:20::/28 reserved for ORCHIDv2 <xref type="rfc" data="rfc7343"/>.
      For complete registration details, see <xref type="registry" data="iana-ipv6-special-registry"/>.</footnote>
    <footnote anchor="14">2001:db8::/32 reserved for Documentation <xref type="rfc" data="rfc3849"/>.
      For complete registration details, see <xref type="registry" data="iana-ipv6-special-registry"/>.</footnote>
    <footnote anchor="15">2002::/16 reserved for 6to4 <xref type="rfc" data="rfc3056"/>.
      For complete registration details, see <xref type="registry" data="iana-ipv6-special-registry"/>.</footnote>

    <people/>
  </registry>
</registry>
|
435
vendor/netaddr/ip/ipv6-unicast-address-assignments.xml
vendored
Normal file
@ -0,0 +1,435 @@
<?xml version='1.0' encoding='UTF-8'?>
<?xml-stylesheet type="text/xsl" href="ipv6-unicast-address-assignments.xsl"?>
<?oxygen RNGSchema="ipv6-unicast-address-assignments.rng" type="xml"?>
<registry xmlns="http://www.iana.org/assignments" id="ipv6-unicast-address-assignments">
  <title>IPv6 Global Unicast Address Assignments</title>
  <category>Internet Protocol version 6 (IPv6) Global Unicast Allocations</category>
  <updated>2019-11-06</updated>
  <xref type="rfc" data="rfc7249"/>
  <registration_rule>Allocations to RIRs are made in line with the Global Policy published at
    <xref type="uri" data="http://www.icann.org/en/resources/policy/global-addressing"/>.
    All other assignments require IETF Review.</registration_rule>
  <description>The allocation of Internet Protocol version 6 (IPv6) unicast address space is listed
    here. References to the various other registries detailing the use of the IPv6 address
    space can be found in the <xref type="registry" data="ipv6-address-space">IPv6 Address Space registry</xref>.</description>
  <note>The assignable Global Unicast Address space is defined in <xref type="rfc" data="rfc3513"/> as the address block
    defined by the prefix 2000::/3. <xref type="rfc" data="rfc3513"/> was later obsoleted by <xref type="rfc" data="rfc4291"/>. All address
    space in this block not listed in the table below is reserved by IANA for future
    allocation.
  </note>
  <record date="1999-07-01">
    <prefix>2001:0000::/23</prefix>
    <description>IANA</description>
    <whois>whois.iana.org</whois>
    <status>ALLOCATED</status>
    <notes>2001:0000::/23 is reserved for IETF Protocol Assignments <xref type="rfc" data="rfc2928"/>.
      2001:0000::/32 is reserved for TEREDO <xref type="rfc" data="rfc4380"/>.
      2001:1::1/128 is reserved for Port Control Protocol Anycast <xref type="rfc" data="rfc7723"/>.
      2001:2::/48 is reserved for Benchmarking <xref type="rfc" data="rfc5180"/><xref type="rfc-errata" data="1752"/>.
      2001:3::/32 is reserved for AMT <xref type="rfc" data="rfc7450"/>.
      2001:4:112::/48 is reserved for AS112-v6 <xref type="rfc" data="rfc7535"/>.
      2001:10::/28 is deprecated (previously ORCHID) <xref type="rfc" data="rfc4843"/>.
      2001:20::/28 is reserved for ORCHIDv2 <xref type="rfc" data="rfc7343"/>.
      2001:db8::/32 is reserved for Documentation <xref type="rfc" data="rfc3849"/>.
      For complete registration details, see <xref type="registry" data="iana-ipv6-special-registry"/>.</notes>
  </record>
  <record date="1999-07-01">
    <prefix>2001:0200::/23</prefix>
    <description>APNIC</description>
    <whois>whois.apnic.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.apnic.net/</server>
    </rdap>
    <notes/>
  </record>
  <record date="1999-07-01">
    <prefix>2001:0400::/23</prefix>
    <description>ARIN</description>
    <whois>whois.arin.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.arin.net/registry</server>
      <server>http://rdap.arin.net/registry</server>
    </rdap>
    <notes/>
  </record>
  <record date="1999-07-01">
    <prefix>2001:0600::/23</prefix>
    <description>RIPE NCC</description>
    <whois>whois.ripe.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.db.ripe.net/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2002-11-02">
    <prefix>2001:0800::/22</prefix>
    <description>RIPE NCC</description>
    <whois>whois.ripe.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.db.ripe.net/</server>
    </rdap>
    <notes>2001:0800::/23 was allocated on 2002-05-02. The more recent
      allocation (2002-11-02) incorporates the previous allocation.</notes>
  </record>
  <record date="2002-05-02">
    <prefix>2001:0c00::/23</prefix>
    <description>APNIC</description>
    <whois>whois.apnic.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.apnic.net/</server>
    </rdap>
    <notes>2001:db8::/32 reserved for Documentation <xref type="rfc" data="rfc3849"/>.
      For complete registration details, see <xref type="registry" data="iana-ipv6-special-registry"/>.</notes>
  </record>
  <record date="2003-01-01">
    <prefix>2001:0e00::/23</prefix>
    <description>APNIC</description>
    <whois>whois.apnic.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.apnic.net/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2002-11-01">
    <prefix>2001:1200::/23</prefix>
    <description>LACNIC</description>
    <whois>whois.lacnic.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.lacnic.net/rdap/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2003-07-01">
    <prefix>2001:1400::/22</prefix>
    <description>RIPE NCC</description>
    <whois>whois.ripe.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.db.ripe.net/</server>
    </rdap>
    <notes>2001:1400::/23 was allocated on 2003-02-01. The more recent
      allocation (2003-07-01) incorporates the previous allocation.</notes>
  </record>
  <record date="2003-04-01">
    <prefix>2001:1800::/23</prefix>
    <description>ARIN</description>
    <whois>whois.arin.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.arin.net/registry</server>
      <server>http://rdap.arin.net/registry</server>
    </rdap>
    <notes/>
  </record>
  <record date="2004-01-01">
    <prefix>2001:1a00::/23</prefix>
    <description>RIPE NCC</description>
    <whois>whois.ripe.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.db.ripe.net/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2004-05-04">
    <prefix>2001:1c00::/22</prefix>
    <description>RIPE NCC</description>
    <whois>whois.ripe.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.db.ripe.net/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2019-03-12">
    <prefix>2001:2000::/19</prefix>
    <description>RIPE NCC</description>
    <whois>whois.ripe.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.db.ripe.net/</server>
    </rdap>
    <notes>2001:2000::/20, 2001:3000::/21, and 2001:3800::/22
      were allocated on 2004-05-04. The more recent allocation
      (2019-03-12) incorporates all these previous allocations.</notes>
  </record>
  <record date="2004-06-11">
    <prefix>2001:4000::/23</prefix>
    <description>RIPE NCC</description>
    <whois>whois.ripe.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.db.ripe.net/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2004-06-01">
    <prefix>2001:4200::/23</prefix>
    <description>AFRINIC</description>
    <whois>whois.afrinic.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.afrinic.net/rdap/</server>
      <server>http://rdap.afrinic.net/rdap/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2004-06-11">
    <prefix>2001:4400::/23</prefix>
    <description>APNIC</description>
    <whois>whois.apnic.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.apnic.net/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2004-08-17">
    <prefix>2001:4600::/23</prefix>
    <description>RIPE NCC</description>
    <whois>whois.ripe.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.db.ripe.net/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2004-08-24">
    <prefix>2001:4800::/23</prefix>
    <description>ARIN</description>
    <whois>whois.arin.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.arin.net/registry</server>
      <server>http://rdap.arin.net/registry</server>
    </rdap>
    <notes/>
  </record>
  <record date="2004-10-15">
    <prefix>2001:4a00::/23</prefix>
    <description>RIPE NCC</description>
    <whois>whois.ripe.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.db.ripe.net/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2004-12-17">
    <prefix>2001:4c00::/23</prefix>
    <description>RIPE NCC</description>
    <whois>whois.ripe.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.db.ripe.net/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2004-09-10">
    <prefix>2001:5000::/20</prefix>
    <description>RIPE NCC</description>
    <whois>whois.ripe.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.db.ripe.net/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2004-11-30">
    <prefix>2001:8000::/19</prefix>
    <description>APNIC</description>
    <whois>whois.apnic.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.apnic.net/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2004-11-30">
    <prefix>2001:a000::/20</prefix>
    <description>APNIC</description>
    <whois>whois.apnic.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.apnic.net/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2006-03-08">
    <prefix>2001:b000::/20</prefix>
    <description>APNIC</description>
    <whois>whois.apnic.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.apnic.net/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2001-02-01">
    <prefix>2002:0000::/16</prefix>
    <description>6to4</description>
    <whois/>
    <status>ALLOCATED</status>
    <notes>2002::/16 is reserved for 6to4 <xref type="rfc" data="rfc3056"/>.
      For complete registration details, see <xref type="registry" data="iana-ipv6-special-registry"/>.</notes>
  </record>
  <record date="2005-01-12">
    <prefix>2003:0000::/18</prefix>
    <description>RIPE NCC</description>
    <whois>whois.ripe.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.db.ripe.net/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2006-10-03">
    <prefix>2400:0000::/12</prefix>
    <description>APNIC</description>
    <whois>whois.apnic.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.apnic.net/</server>
    </rdap>
    <notes>2400:0000::/19 was allocated on 2005-05-20. 2400:2000::/19 was allocated on 2005-07-08. 2400:4000::/21 was
      allocated on 2005-08-08. 2404:0000::/23 was allocated on 2006-01-19. The more recent allocation (2006-10-03)
      incorporates all these previous allocations.</notes>
  </record>
  <record date="2006-10-03">
    <prefix>2600:0000::/12</prefix>
    <description>ARIN</description>
    <whois>whois.arin.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.arin.net/registry</server>
      <server>http://rdap.arin.net/registry</server>
    </rdap>
    <notes>2600:0000::/22, 2604:0000::/22, 2608:0000::/22 and 260c:0000::/22 were allocated on 2005-04-19. The more
      recent allocation (2006-10-03) incorporates all these previous allocations.</notes>
  </record>
  <record date="2005-11-17">
    <prefix>2610:0000::/23</prefix>
    <description>ARIN</description>
    <whois>whois.arin.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.arin.net/registry</server>
      <server>http://rdap.arin.net/registry</server>
    </rdap>
    <notes/>
  </record>
  <record date="2006-09-12">
    <prefix>2620:0000::/23</prefix>
    <description>ARIN</description>
    <whois>whois.arin.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.arin.net/registry</server>
      <server>http://rdap.arin.net/registry</server>
    </rdap>
    <notes/>
  </record>
  <record date="2019-11-06">
    <prefix>2630:0000::/12</prefix>
    <description>ARIN</description>
    <whois>whois.arin.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.arin.net/registry</server>
      <server>http://rdap.arin.net/registry</server>
    </rdap>
    <notes/>
  </record>
  <record date="2006-10-03">
    <prefix>2800:0000::/12</prefix>
    <description>LACNIC</description>
    <whois>whois.lacnic.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.lacnic.net/rdap/</server>
    </rdap>
    <notes>2800:0000::/23 was allocated on 2005-11-17. The more recent allocation (2006-10-03) incorporates the
      previous allocation.</notes>
  </record>
  <record date="2006-10-03">
    <prefix>2a00:0000::/12</prefix>
    <description>RIPE NCC</description>
    <whois>whois.ripe.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.db.ripe.net/</server>
    </rdap>
    <notes>2a00:0000::/21 was originally allocated on 2005-04-19. 2a01:0000::/23 was allocated on 2005-07-14.
      2a01:0000::/16 (incorporating the 2a01:0000::/23) was allocated on 2005-12-15. The more recent allocation
      (2006-10-03) incorporates these previous allocations.</notes>
  </record>
  <record date="2019-06-05">
    <prefix>2a10:0000::/12</prefix>
    <description>RIPE NCC</description>
    <whois>whois.ripe.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.db.ripe.net/</server>
    </rdap>
    <notes/>
  </record>
  <record date="2006-10-03">
    <prefix>2c00:0000::/12</prefix>
    <description>AFRINIC</description>
    <whois>whois.afrinic.net</whois>
    <status>ALLOCATED</status>
    <rdap>
      <server>https://rdap.afrinic.net/rdap/</server>
      <server>http://rdap.afrinic.net/rdap/</server>
    </rdap>
    <notes/>
  </record>
  <record date="1999-07-01">
    <prefix>2d00:0000::/8</prefix>
    <description>IANA</description>
    <whois/>
    <status>RESERVED</status>
    <notes/>
  </record>
  <record date="1999-07-01">
    <prefix>2e00:0000::/7</prefix>
    <description>IANA</description>
    <whois/>
    <status>RESERVED</status>
    <notes/>
  </record>
  <record date="1999-07-01">
    <prefix>3000:0000::/4</prefix>
    <description>IANA</description>
    <whois/>
    <status>RESERVED</status>
    <notes/>
  </record>
  <record date="2008-04">
    <prefix>3ffe::/16</prefix>
    <description>IANA</description>
    <whois/>
    <status>RESERVED</status>
    <notes>3ffe:831f::/32 was used for Teredo in some old but widely distributed networking stacks. This usage is
      deprecated in favor of 2001::/32, which was allocated for the purpose in <xref type="rfc" data="rfc4380"/>.
      3ffe::/16 and 5f00::/8 were used for the 6bone but were returned. <xref type="rfc" data="rfc5156"/></notes>
  </record>
  <record date="2008-04">
    <prefix>5f00::/8</prefix>
    <description>IANA</description>
    <whois/>
    <status>RESERVED</status>
    <notes>3ffe::/16 and 5f00::/8 were used for the 6bone but were returned. <xref type="rfc" data="rfc5156"/></notes>
  </record>
  <people/>
</registry>
|
4441
vendor/netaddr/ip/multicast-addresses.xml
vendored
Normal file
File diff suppressed because it is too large
117
vendor/netaddr/ip/nmap.py
vendored
Normal file
@ -0,0 +1,117 @@
#-----------------------------------------------------------------------------
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
# Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
"""
Routines for dealing with nmap-style IPv4 address ranges.

Based on nmap's Target Specification :-

    http://nmap.org/book/man-target-specification.html
"""

from netaddr.core import AddrFormatError
from netaddr.ip import IPAddress, IPNetwork
from netaddr.compat import _iter_range, _is_str, _iter_next


def _nmap_octet_target_values(spec):
    # Generates the sequence of values for an individual octet as defined in
    # the nmap Target Specification.
    values = set()

    for element in spec.split(','):
        if '-' in element:
            left, right = element.split('-', 1)
            if not left:
                left = 0
            if not right:
                right = 255
            low = int(left)
            high = int(right)
            if not ((0 <= low <= 255) and (0 <= high <= 255)):
                raise ValueError('octet value overflow for spec %s!' % (spec,))
            if low > high:
                raise ValueError('left side of hyphen must be <= right %r' % (element,))
            for octet in _iter_range(low, high + 1):
                values.add(octet)
        else:
            octet = int(element)
            if not (0 <= octet <= 255):
                raise ValueError('octet value overflow for spec %s!' % (spec,))
            values.add(octet)

    return sorted(values)


def _generate_nmap_octet_ranges(nmap_target_spec):
    # Generate 4 lists containing all octets defined by a given nmap Target
    # specification.
    if not _is_str(nmap_target_spec):
        raise TypeError('string expected, not %s' % type(nmap_target_spec))

    if not nmap_target_spec:
        raise ValueError('nmap target specification cannot be blank!')

    tokens = nmap_target_spec.split('.')

    if len(tokens) != 4:
        raise AddrFormatError('invalid nmap range: %s' % (nmap_target_spec,))

    return (_nmap_octet_target_values(tokens[0]),
            _nmap_octet_target_values(tokens[1]),
            _nmap_octet_target_values(tokens[2]),
            _nmap_octet_target_values(tokens[3]))


def _parse_nmap_target_spec(target_spec):
    if '/' in target_spec:
        _, prefix = target_spec.split('/', 1)
        if not (0 < int(prefix) < 33):
            raise AddrFormatError('CIDR prefix expected, not %s' % (prefix,))
        net = IPNetwork(target_spec)
        if net.version != 4:
            raise AddrFormatError('CIDR is only supported for IPv4!')
        for ip in net:
            yield ip
    elif ':' in target_spec:
        # nmap currently only supports IPv6 addresses without prefixes.
        yield IPAddress(target_spec)
    else:
        octet_ranges = _generate_nmap_octet_ranges(target_spec)
        for w in octet_ranges[0]:
            for x in octet_ranges[1]:
                for y in octet_ranges[2]:
                    for z in octet_ranges[3]:
                        yield IPAddress("%d.%d.%d.%d" % (w, x, y, z), 4)


def valid_nmap_range(target_spec):
    """
    :param target_spec: an nmap-style IP range target specification.

    :return: ``True`` if the IP range target spec is valid, ``False`` otherwise.
    """
    try:
        _iter_next(_parse_nmap_target_spec(target_spec))
        return True
    except (TypeError, ValueError, AddrFormatError):
        pass
    return False


def iter_nmap_range(*nmap_target_spec):
    """
    A generator that yields IPAddress objects for each address defined by the
    given nmap target specifications.

    See https://nmap.org/book/man-target-specification.html for details.

    :param *nmap_target_spec: one or more nmap IP range target specifications.

    :return: an iterator producing IPAddress objects for each IP in the target spec(s).
    """
    for target_spec in nmap_target_spec:
        for addr in _parse_nmap_target_spec(target_spec):
            yield addr
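The octet handling in `_nmap_octet_target_values` is self-contained and easy to check without netaddr. The sketch below is a hypothetical stdlib-only stand-in (`parse_octet_spec` is not a netaddr name) that mirrors the same comma/hyphen rules, including the bare `-` meaning 0-255:

```python
# Stdlib-only sketch of nmap octet-spec parsing, mirroring the rules in
# _nmap_octet_target_values above. parse_octet_spec is a hypothetical name.

def parse_octet_spec(spec):
    """Return the sorted octet values described by an nmap octet spec."""
    values = set()
    for element in spec.split(','):
        if '-' in element:
            left, right = element.split('-', 1)
            low = int(left) if left else 0        # "-5" means "0-5"
            high = int(right) if right else 255   # "250-" means "250-255"
            if not (0 <= low <= high <= 255):
                raise ValueError('bad octet range: %r' % (element,))
            values.update(range(low, high + 1))
        else:
            octet = int(element)
            if not (0 <= octet <= 255):
                raise ValueError('octet value overflow: %r' % (element,))
            values.add(octet)
    return sorted(values)
```

For example, `parse_octet_spec('0-3,5')` gives `[0, 1, 2, 3, 5]`, and `parse_octet_spec('-')` expands to all 256 octet values, matching the behaviour of the vendored function.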
61
vendor/netaddr/ip/rfc1924.py
vendored
Normal file
@ -0,0 +1,61 @@
#-----------------------------------------------------------------------------
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
# Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
"""A basic implementation of RFC 1924 ;-)"""

from netaddr.core import AddrFormatError
from netaddr.ip import IPAddress

from netaddr.compat import _zip


def chr_range(low, high):
    """Returns all characters between low and high chars."""
    return [chr(i) for i in range(ord(low), ord(high) + 1)]

#: Base 85 integer index to character lookup table.
BASE_85 = (
    chr_range('0', '9') + chr_range('A', 'Z') +
    chr_range('a', 'z') +
    ['!', '#', '$', '%', '&', '(', ')', '*', '+', '-', ';', '<', '=', '>',
     '?', '@', '^', '_', '`', '{', '|', '}', '~']
)

#: Base 85 digit to integer lookup table.
BASE_85_DICT = dict(_zip(BASE_85, range(0, 86)))


def ipv6_to_base85(addr):
    """Convert a regular IPv6 address to base 85."""
    ip = IPAddress(addr)
    int_val = int(ip)

    remainder = []
    while int_val > 0:
        remainder.append(int_val % 85)
        int_val //= 85

    encoded = ''.join([BASE_85[w] for w in reversed(remainder)])
    leading_zeroes = (20 - len(encoded)) * "0"
    return leading_zeroes + encoded


def base85_to_ipv6(addr):
    """
    Convert a base 85 IPv6 address to its hexadecimal format.
    """
    tokens = list(addr)

    if len(tokens) != 20:
        raise AddrFormatError('Invalid base 85 IPv6 address: %r' % (addr,))

    result = 0
    for i, num in enumerate(reversed(tokens)):
        num = BASE_85_DICT[num]
        result += (num * 85 ** i)

    ip = IPAddress(result, 6)

    return str(ip)
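The base-85 arithmetic above can be exercised without netaddr by using the standard library's `ipaddress` module for the integer conversion. The sketch below (hypothetical `encode`/`decode` helpers, not part of netaddr) performs the same fixed-width 20-character round trip over the same 85-character alphabet:

```python
# Stdlib-only sketch of the RFC 1924 base-85 scheme implemented above.
# encode/decode are hypothetical names; the vendored module wraps the same
# arithmetic around netaddr.IPAddress instead of ipaddress.IPv6Address.
import ipaddress

# Same 85-character alphabet as BASE_85 in the module above.
ALPHABET = (
    [chr(c) for c in range(ord('0'), ord('9') + 1)] +
    [chr(c) for c in range(ord('A'), ord('Z') + 1)] +
    [chr(c) for c in range(ord('a'), ord('z') + 1)] +
    list("!#$%&()*+-;<=>?@^_`{|}~")
)
DIGIT = {ch: i for i, ch in enumerate(ALPHABET)}

def encode(addr):
    """Encode an IPv6 address as a 20-character base-85 string."""
    value = int(ipaddress.IPv6Address(addr))
    out = []
    while value > 0:
        out.append(ALPHABET[value % 85])
        value //= 85
    return ''.join(reversed(out)).rjust(20, '0')

def decode(text):
    """Decode a 20-character base-85 string back to an IPv6 address."""
    value = 0
    for ch in text:
        value = value * 85 + DIGIT[ch]
    return str(ipaddress.IPv6Address(value))
```

A round trip such as `decode(encode('1080::8:800:200c:417a'))` returns the original address; the fixed zero-padding to 20 characters matches the `leading_zeroes` logic in `ipv6_to_base85`.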
748
vendor/netaddr/ip/sets.py
vendored
Normal file
@ -0,0 +1,748 @@
|
||||
#-----------------------------------------------------------------------------
|
||||
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
|
||||
#
|
||||
# Released under the BSD license. See the LICENSE file for details.
|
||||
#-----------------------------------------------------------------------------
|
||||
"""Set based operations for IP addresses and subnets."""
|
||||
|
||||
import itertools as _itertools
|
||||
|
||||
from netaddr.ip import (IPNetwork, IPAddress, IPRange, cidr_merge,
|
||||
cidr_exclude, iprange_to_cidrs)
|
||||
|
||||
from netaddr.compat import _sys_maxint, _dict_keys, _int_type
|
||||
|
||||
|
||||
def _subtract(supernet, subnets, subnet_idx, ranges):
|
||||
"""Calculate IPSet([supernet]) - IPSet(subnets).
|
||||
|
||||
Assumptions: subnets is sorted, subnet_idx points to the first
|
||||
element in subnets that is a subnet of supernet.
|
||||
|
||||
Results are appended to the ranges parameter as tuples of in format
|
||||
(version, first, last). Return value is the first subnet_idx that
|
||||
does not point to a subnet of supernet (or len(subnets) if all
|
||||
subsequents items are a subnet of supernet).
|
||||
"""
|
||||
version = supernet._module.version
|
||||
subnet = subnets[subnet_idx]
|
||||
if subnet.first > supernet.first:
|
||||
ranges.append((version, supernet.first, subnet.first - 1))
|
||||
|
||||
subnet_idx += 1
|
||||
prev_subnet = subnet
|
||||
while subnet_idx < len(subnets):
|
||||
cur_subnet = subnets[subnet_idx]
|
||||
|
||||
if cur_subnet not in supernet:
|
||||
break
|
||||
if prev_subnet.last + 1 == cur_subnet.first:
|
||||
# two adjacent, non-mergable IPNetworks
|
||||
pass
|
||||
else:
|
||||
ranges.append((version, prev_subnet.last + 1, cur_subnet.first - 1))
|
||||
|
||||
subnet_idx += 1
|
||||
prev_subnet = cur_subnet
|
||||
|
||||
first = prev_subnet.last + 1
|
||||
last = supernet.last
|
||||
if first <= last:
|
||||
ranges.append((version, first, last))
|
||||
|
||||
return subnet_idx
|
||||
|
||||
|
||||
def _iter_merged_ranges(sorted_ranges):
|
||||
"""Iterate over sorted_ranges, merging where possible
|
||||
|
||||
Sorted ranges must be a sorted iterable of (version, first, last) tuples.
|
||||
Merging occurs for pairs like [(4, 10, 42), (4, 43, 100)] which is merged
|
||||
into (4, 10, 100), and leads to return value
|
||||
( IPAddress(10, 4), IPAddress(100, 4) ), which is suitable input for the
|
||||
iprange_to_cidrs function.
|
||||
"""
|
||||
if not sorted_ranges:
|
||||
return
|
||||
|
||||
current_version, current_start, current_stop = sorted_ranges[0]
|
||||
|
||||
for next_version, next_start, next_stop in sorted_ranges[1:]:
|
||||
if next_start == current_stop + 1 and next_version == current_version:
|
||||
# Can be merged.
|
||||
current_stop = next_stop
|
||||
continue
|
||||
# Cannot be merged.
|
||||
yield (IPAddress(current_start, current_version),
|
||||
IPAddress(current_stop, current_version))
|
||||
current_start = next_start
|
||||
current_stop = next_stop
|
||||
current_version = next_version
|
||||
yield (IPAddress(current_start, current_version),
|
||||
IPAddress(current_stop, current_version))
|
||||
|
||||
|
||||
class IPSet(object):
|
||||
"""
|
||||
Represents an unordered collection (set) of unique IP addresses and
|
||||
subnets.
|
||||
|
||||
"""
|
||||
__slots__ = ('_cidrs', '__weakref__')
|
||||
|
||||
def __init__(self, iterable=None, flags=0):
|
||||
"""
|
||||
Constructor.
|
||||
|
||||
:param iterable: (optional) an iterable containing IP addresses,
|
||||
subnets or ranges.
|
||||
|
||||
:param flags: decides which rules are applied to the interpretation
|
||||
of the addr value. See the netaddr.core namespace documentation
|
||||
for supported constant values.
|
||||
|
||||
"""
|
||||
if isinstance(iterable, IPNetwork):
|
||||
self._cidrs = {iterable.cidr: True}
|
||||
elif isinstance(iterable, IPRange):
|
||||
self._cidrs = dict.fromkeys(
|
||||
iprange_to_cidrs(iterable[0], iterable[-1]), True)
|
||||
elif isinstance(iterable, IPSet):
|
||||
self._cidrs = dict.fromkeys(iterable.iter_cidrs(), True)
|
||||
else:
|
||||
self._cidrs = {}
|
||||
if iterable is not None:
|
||||
mergeable = []
|
||||
for addr in iterable:
|
||||
if isinstance(addr, _int_type):
|
||||
addr = IPAddress(addr, flags=flags)
|
||||
mergeable.append(addr)
|
||||
|
||||
for cidr in cidr_merge(mergeable):
|
||||
self._cidrs[cidr] = True
|
||||
|
||||
def __getstate__(self):
|
||||
""":return: Pickled state of an ``IPSet`` object."""
|
||||
return tuple([cidr.__getstate__() for cidr in self._cidrs])
|
||||
|
||||
def __setstate__(self, state):
|
||||
"""
|
||||
:param state: data used to unpickle a pickled ``IPSet`` object.
|
||||
|
||||
"""
|
||||
self._cidrs = dict.fromkeys(
|
||||
(IPNetwork((value, prefixlen), version=version)
|
||||
for value, prefixlen, version in state),
|
||||
True)
|
||||
|
||||
def _compact_single_network(self, added_network):
|
||||
"""
|
||||
Same as compact(), but assume that added_network is the only change and
|
||||
that this IPSet was properly compacted before added_network was added.
|
||||
This allows to perform compaction much faster. added_network must
|
||||
already be present in self._cidrs.
|
||||
"""
|
||||
added_first = added_network.first
|
||||
added_last = added_network.last
|
||||
added_version = added_network.version
|
||||
|
||||
# Check for supernets and subnets of added_network.
|
||||
if added_network._prefixlen == added_network._module.width:
|
||||
# This is a single IP address, i.e. /32 for IPv4 or /128 for IPv6.
|
||||
# It does not have any subnets, so we only need to check for its
|
||||
# potential supernets.
|
||||
for potential_supernet in added_network.supernet():
|
||||
if potential_supernet in self._cidrs:
|
||||
del self._cidrs[added_network]
|
||||
return
|
||||
else:
|
||||
# IPNetworks from self._cidrs that are subnets of added_network.
|
||||
to_remove = []
|
||||
for cidr in self._cidrs:
|
||||
if (cidr._module.version != added_version or cidr == added_network):
|
||||
# We found added_network or some network of a different version.
|
||||
continue
|
||||
first = cidr.first
|
||||
last = cidr.last
|
||||
if first >= added_first and last <= added_last:
|
||||
# cidr is a subnet of added_network. Remember to remove it.
|
||||
to_remove.append(cidr)
|
||||
elif first <= added_first and last >= added_last:
|
||||
# cidr is a supernet of added_network. Remove added_network.
|
||||
del self._cidrs[added_network]
|
||||
# This IPSet was properly compacted before. Since added_network
|
||||
# is removed now, it must again be properly compacted -> done.
|
||||
assert (not to_remove)
|
||||
return
|
||||
for item in to_remove:
|
||||
del self._cidrs[item]
|
||||
|
||||
# Check if added_network can be merged with another network.
|
||||
|
||||
# Note that merging can only happen between networks of the same
|
||||
# prefixlen. This just leaves 2 candidates: The IPNetworks just before
|
||||
# and just after the added_network.
|
||||
# This can be reduced to 1 candidate: 10.0.0.0/24 and 10.0.1.0/24 can
|
||||
# be merged into into 10.0.0.0/23. But 10.0.1.0/24 and 10.0.2.0/24
|
||||
# cannot be merged. With only 1 candidate, we might as well make a
|
||||
# dictionary lookup.
|
||||
shift_width = added_network._module.width - added_network.prefixlen
|
||||
while added_network.prefixlen != 0:
|
||||
# figure out if the least significant bit of the network part is 0 or 1.
|
||||
the_bit = (added_network._value >> shift_width) & 1
|
||||
if the_bit:
|
||||
candidate = added_network.previous()
|
||||
else:
|
||||
candidate = added_network.next()
|
||||
|
||||
if candidate not in self._cidrs:
|
||||
# The only possible merge does not work -> merge done
|
||||
return
|
||||
# Remove added_network&candidate, add merged network.
|
||||
del self._cidrs[candidate]
|
||||
del self._cidrs[added_network]
|
||||
added_network.prefixlen -= 1
|
||||
# Be sure that we set the host bits to 0 when we move the prefixlen.
|
||||
# Otherwise, adding 255.255.255.255/32 will result in a merged
|
||||
# 255.255.255.255/24 network, but we want 255.255.255.0/24.
|
||||
shift_width += 1
|
||||
added_network._value = (added_network._value >> shift_width) << shift_width
|
||||
self._cidrs[added_network] = True
|
||||
|
||||
def compact(self):
|
||||
"""
|
||||
Compact internal list of `IPNetwork` objects using a CIDR merge.
|
||||
"""
|
||||
cidrs = cidr_merge(self._cidrs)
|
||||
self._cidrs = dict.fromkeys(cidrs, True)
|
||||
|
||||
def __hash__(self):
|
||||
"""
|
||||
Raises ``TypeError`` if this method is called.
|
||||
|
||||
.. note:: IPSet objects are not hashable and cannot be used as \
|
||||
dictionary keys or as members of other sets. \
|
||||
"""
|
||||
raise TypeError('IP sets are unhashable!')
|
||||
|
||||
def __contains__(self, ip):
|
||||
"""
|
||||
:param ip: An IP address or subnet.
|
||||
|
||||
:return: ``True`` if IP address or subnet is a member of this IP set.
|
||||
"""
|
||||
# Iterating over self._cidrs is an O(n) operation: 1000 items in
|
||||
# self._cidrs would mean 1000 loops. Iterating over all possible
|
||||
# supernets loops at most 32 times for IPv4 or 128 times for IPv6,
|
||||
# no matter how many CIDRs this object contains.
|
||||
supernet = IPNetwork(ip)
|
||||
if supernet in self._cidrs:
|
||||
return True
|
||||
while supernet._prefixlen:
|
||||
supernet._prefixlen -= 1
|
||||
if supernet in self._cidrs:
|
||||
return True
|
||||
return False
|
||||
|
||||
def __nonzero__(self):
|
||||
"""Return True if IPSet contains at least one IP, else False"""
|
||||
return bool(self._cidrs)
|
||||
|
||||
__bool__ = __nonzero__ # Python 3.x.
|
||||
|
||||
def __iter__(self):
|
||||
"""
|
||||
:return: an iterator over the IP addresses within this IP set.
|
||||
"""
|
||||
return _itertools.chain(*sorted(self._cidrs))
|
||||
|
||||
def iter_cidrs(self):
|
||||
"""
|
||||
:return: an iterator over individual IP subnets within this IP set.
|
||||
"""
|
||||
return sorted(self._cidrs)
|
||||
|
||||
def add(self, addr, flags=0):
|
||||
"""
|
||||
Adds an IP address or subnet or IPRange to this IP set. Has no effect if
|
||||
it is already present.
|
||||
|
||||
Note that where possible the IP address or subnet is merged with other
|
||||
members of the set to form more concise CIDR blocks.
|
||||
|
||||
:param addr: An IP address or subnet in either string or object form, or
|
||||
an IPRange object.
|
||||
|
||||
:param flags: decides which rules are applied to the interpretation
|
||||
of the addr value. See the netaddr.core namespace documentation
|
||||
for supported constant values.
|
||||
|
||||
"""
|
||||
if isinstance(addr, IPRange):
|
||||
new_cidrs = dict.fromkeys(
|
||||
iprange_to_cidrs(addr[0], addr[-1]), True)
|
||||
self._cidrs.update(new_cidrs)
|
||||
self.compact()
|
||||
return
|
||||
if isinstance(addr, IPNetwork):
|
||||
# Networks like 10.1.2.3/8 need to be normalized to 10.0.0.0/8
|
||||
addr = addr.cidr
|
||||
elif isinstance(addr, _int_type):
|
||||
addr = IPNetwork(IPAddress(addr, flags=flags))
|
||||
else:
|
||||
addr = IPNetwork(addr)
|
||||
|
||||
self._cidrs[addr] = True
|
||||
self._compact_single_network(addr)
|
||||
|
||||
def remove(self, addr, flags=0):
|
||||
"""
|
||||
Removes an IP address or subnet or IPRange from this IP set. Does
|
||||
nothing if it is not already a member.
|
||||
|
||||
Note that this method behaves more like discard() found in regular
|
||||
Python sets because it doesn't raise KeyError exceptions if the
|
||||
IP address or subnet is question does not exist. It doesn't make sense
|
||||
to fully emulate that behaviour here as IP sets contain groups of
|
||||
individual IP addresses as individual set members using IPNetwork
|
||||
objects.
|
||||
|
||||
:param addr: An IP address or subnet, or an IPRange.
|
||||
|
||||
:param flags: decides which rules are applied to the interpretation
|
||||
of the addr value. See the netaddr.core namespace documentation
|
||||
for supported constant values.
|
||||
|
||||
"""
|
||||
if isinstance(addr, IPRange):
|
||||
cidrs = iprange_to_cidrs(addr[0], addr[-1])
|
||||
for cidr in cidrs:
|
||||
self.remove(cidr)
|
||||
return
|
||||
|
||||
if isinstance(addr, _int_type):
|
||||
addr = IPAddress(addr, flags=flags)
|
||||
else:
|
||||
addr = IPNetwork(addr)
|
||||
|
||||
# This add() is required for address blocks provided that are larger
|
||||
# than blocks found within the set but have overlaps. e.g. :-
|
||||
#
|
||||
# >>> IPSet(['192.0.2.0/24']).remove('192.0.2.0/23')
|
||||
# IPSet([])
|
||||
#
|
||||
self.add(addr)
|
||||
|
||||
remainder = None
|
||||
matching_cidr = None
|
||||
|
||||
# Search for a matching CIDR and exclude IP from it.
|
||||
for cidr in self._cidrs:
|
||||
if addr in cidr:
|
||||
remainder = cidr_exclude(cidr, addr)
|
||||
matching_cidr = cidr
|
||||
break
|
||||
|
||||
# Replace matching CIDR with remaining CIDR elements.
|
||||
if remainder is not None:
|
||||
del self._cidrs[matching_cidr]
|
||||
for cidr in remainder:
|
||||
self._cidrs[cidr] = True
|
||||
# No call to self.compact() is needed. Removing an IPNetwork cannot
|
||||
# create mergable networks.
|
||||
|
||||
def pop(self):
|
||||
"""
|
||||
Removes and returns an arbitrary IP address or subnet from this IP
|
||||
set.
|
||||
|
||||
:return: An IP address or subnet.
|
||||
"""
|
||||
return self._cidrs.popitem()[0]
|
||||
|
||||
def isdisjoint(self, other):
|
||||
"""
|
||||
:param other: an IP set.
|
||||
|
||||
:return: ``True`` if this IP set has no elements (IP addresses
|
||||
or subnets) in common with other. Intersection *must* be an
|
||||
empty set.
|
||||
"""
|
||||
result = self.intersection(other)
|
||||
return not result
|
||||
|
||||
def copy(self):
|
||||
""":return: a shallow copy of this IP set."""
|
||||
obj_copy = self.__class__()
|
||||
obj_copy._cidrs.update(self._cidrs)
|
||||
return obj_copy
|
||||
|
||||
def update(self, iterable, flags=0):
|
||||
"""
|
||||
Update the contents of this IP set with the union of itself and
|
||||
other IP set.
|
||||
|
||||
:param iterable: an iterable containing IP addresses, subnets or ranges.
|
||||
|
||||
:param flags: decides which rules are applied to the interpretation
|
||||
of the addr value. See the netaddr.core namespace documentation
|
||||
for supported constant values.
|
||||
|
||||
"""
|
||||
if isinstance(iterable, IPSet):
|
||||
self._cidrs = dict.fromkeys(
|
||||
(ip for ip in cidr_merge(_dict_keys(self._cidrs)
|
||||
+ _dict_keys(iterable._cidrs))), True)
|
||||
return
|
||||
elif isinstance(iterable, (IPNetwork, IPRange)):
|
||||
self.add(iterable)
|
||||
return
|
||||
|
||||
if not hasattr(iterable, '__iter__'):
|
||||
raise TypeError('an iterable was expected!')
|
||||
# An iterable containing IP addresses or subnets.
|
||||
mergeable = []
|
||||
for addr in iterable:
|
||||
if isinstance(addr, _int_type):
|
||||
addr = IPAddress(addr, flags=flags)
|
||||
mergeable.append(addr)
|
||||
|
||||
for cidr in cidr_merge(_dict_keys(self._cidrs) + mergeable):
|
||||
self._cidrs[cidr] = True
|
||||
|
||||
self.compact()
|
||||
|
||||
def clear(self):
|
||||
"""Remove all IP addresses and subnets from this IP set."""
|
||||
self._cidrs = {}
|
||||
|
||||
def __eq__(self, other):
|
||||
"""
|
||||
:param other: an IP set
|
||||
|
||||
:return: ``True`` if this IP set is equivalent to the ``other`` IP set,
|
||||
``False`` otherwise.
|
||||
"""
|
||||
try:
|
||||
return self._cidrs == other._cidrs
|
||||
except AttributeError:
|
||||
return NotImplemented
|
||||
|
||||
def __ne__(self, other):
|
||||
"""
|
||||
:param other: an IP set
|
||||
|
||||
:return: ``False`` if this IP set is equivalent to the ``other`` IP set,
|
||||
``True`` otherwise.
|
||||
"""
|
||||
try:
|
||||
return self._cidrs != other._cidrs
|
||||
except AttributeError:
|
||||
return NotImplemented
|
||||
|
||||
def __lt__(self, other):
|
||||
"""
|
||||
:param other: an IP set
|
||||
|
||||
:return: ``True`` if this IP set is less than the ``other`` IP set,
|
||||
``False`` otherwise.
|
||||
"""
|
||||
if not hasattr(other, '_cidrs'):
|
||||
return NotImplemented
|
||||
|
||||
return self.size < other.size and self.issubset(other)
|
||||
|
||||
def issubset(self, other):
|
||||
"""
|
||||
:param other: an IP set.
|
||||
|
||||
:return: ``True`` if every IP address and subnet in this IP set
|
||||
is found within ``other``.
|
||||
"""
|
||||
for cidr in self._cidrs:
|
||||
if cidr not in other:
|
||||
return False
|
||||
return True
|
||||
|
||||
__le__ = issubset
|
||||
|
||||
def __gt__(self, other):
|
||||
"""
|
||||
:param other: an IP set.
|
||||
|
||||
:return: ``True`` if this IP set is greater than the ``other`` IP set,
|
||||
``False`` otherwise.
|
||||
"""
|
||||
if not hasattr(other, '_cidrs'):
|
||||
return NotImplemented
|
||||
|
||||
return self.size > other.size and self.issuperset(other)
|
||||
|
||||
def issuperset(self, other):
|
||||
"""
|
||||
:param other: an IP set.
|
||||
|
||||
:return: ``True`` if every IP address and subnet in other IP set
|
||||
is found within this one.
|
||||
"""
|
||||
if not hasattr(other, '_cidrs'):
|
||||
return NotImplemented
|
||||
|
||||
for cidr in other._cidrs:
|
||||
if cidr not in self:
|
||||
return False
|
||||
return True
|
||||
|
||||
__ge__ = issuperset
|
||||
|
||||
def union(self, other):
|
||||
"""
|
||||
:param other: an IP set.
|
||||
|
||||
:return: the union of this IP set and another as a new IP set
|
||||
(combines IP addresses and subnets from both sets).
|
||||
"""
|
||||
ip_set = self.copy()
|
||||
ip_set.update(other)
|
||||
return ip_set
|
||||
|
||||
__or__ = union
|
||||
|
||||
def intersection(self, other):
|
||||
"""
|
||||
:param other: an IP set.
|
||||
|
||||
:return: the intersection of this IP set and another as a new IP set.
|
||||
(IP addresses and subnets common to both sets).
|
||||
"""
|
||||
result_cidrs = {}
|
||||
|
||||
own_nets = sorted(self._cidrs)
|
||||
other_nets = sorted(other._cidrs)
|
||||
own_idx = 0
|
||||
other_idx = 0
|
||||
own_len = len(own_nets)
|
||||
other_len = len(other_nets)
|
||||
while own_idx < own_len and other_idx < other_len:
|
||||
own_cur = own_nets[own_idx]
|
||||
other_cur = other_nets[other_idx]
|
||||
|
||||
if own_cur == other_cur:
|
||||
result_cidrs[own_cur] = True
|
||||
own_idx += 1
|
||||
other_idx += 1
|
||||
elif own_cur in other_cur:
|
||||
result_cidrs[own_cur] = True
|
||||
own_idx += 1
|
||||
elif other_cur in own_cur:
|
||||
result_cidrs[other_cur] = True
|
||||
other_idx += 1
|
||||
else:
|
||||
# own_cur and other_cur have nothing in common
|
||||
if own_cur < other_cur:
|
||||
own_idx += 1
|
||||
else:
|
||||
other_idx += 1
|
||||
|
||||
# We ran out of networks in own_nets or other_nets. Either way, there
|
||||
# can be no further result_cidrs.
|
||||
result = IPSet()
|
||||
result._cidrs = result_cidrs
|
||||
return result
|
||||
|
||||
__and__ = intersection
|
||||
|
||||
def symmetric_difference(self, other):
|
||||
"""
|
||||
:param other: an IP set.
|
||||
|
||||
:return: the symmetric difference of this IP set and another as a new
|
||||
IP set (all IP addresses and subnets that are in exactly one
|
||||
of the sets).
|
||||
"""
|
||||
# In contrast to intersection() and difference(), we cannot construct
|
||||
# the result_cidrs easily. Some cidrs may have to be merged, e.g. for
|
||||
# IPSet(["10.0.0.0/32"]).symmetric_difference(IPSet(["10.0.0.1/32"])).
|
||||
result_ranges = []
|
||||
|
||||
own_nets = sorted(self._cidrs)
|
||||
other_nets = sorted(other._cidrs)
|
||||
own_idx = 0
|
||||
other_idx = 0
|
||||
own_len = len(own_nets)
|
||||
other_len = len(other_nets)
|
||||
while own_idx < own_len and other_idx < other_len:
|
||||
own_cur = own_nets[own_idx]
|
||||
other_cur = other_nets[other_idx]
|
||||
|
||||
if own_cur == other_cur:
|
||||
own_idx += 1
|
||||
other_idx += 1
|
||||
elif own_cur in other_cur:
|
||||
own_idx = _subtract(other_cur, own_nets, own_idx, result_ranges)
|
||||
other_idx += 1
|
||||
elif other_cur in own_cur:
|
||||
other_idx = _subtract(own_cur, other_nets, other_idx, result_ranges)
|
||||
own_idx += 1
|
||||
else:
|
||||
# own_cur and other_cur have nothing in common
|
||||
if own_cur < other_cur:
|
||||
result_ranges.append((own_cur._module.version,
|
||||
own_cur.first, own_cur.last))
|
||||
own_idx += 1
|
||||
else:
|
||||
result_ranges.append((other_cur._module.version,
|
||||
other_cur.first, other_cur.last))
|
||||
other_idx += 1
|
||||
|
||||
# If the above loop terminated because it processed all cidrs of
|
||||
# "other", then any remaining cidrs in self must be part of the result.
|
||||
while own_idx < own_len:
|
||||
own_cur = own_nets[own_idx]
|
||||
result_ranges.append((own_cur._module.version,
|
||||
own_cur.first, own_cur.last))
|
||||
own_idx += 1
|
||||
|
||||
# If the above loop terminated because it processed all cidrs of
|
||||
# self, then any remaining cidrs in "other" must be part of the result.
|
||||
while other_idx < other_len:
|
||||
other_cur = other_nets[other_idx]
|
||||
result_ranges.append((other_cur._module.version,
|
||||
other_cur.first, other_cur.last))
|
||||
other_idx += 1
|
||||
|
||||
result = IPSet()
|
||||
for start, stop in _iter_merged_ranges(result_ranges):
|
||||
cidrs = iprange_to_cidrs(start, stop)
|
||||
for cidr in cidrs:
|
||||
result._cidrs[cidr] = True
|
||||
return result
|
||||
|
||||
__xor__ = symmetric_difference
|
||||
|
||||
def difference(self, other):
|
||||
"""
|
||||
:param other: an IP set.
|
||||
|
||||
:return: the difference between this IP set and another as a new IP
|
||||
set (all IP addresses and subnets that are in this IP set but
|
||||
not found in the other.)
|
||||
"""
|
||||
result_ranges = []
|
||||
result_cidrs = {}
|
||||
|
||||
own_nets = sorted(self._cidrs)
|
||||
other_nets = sorted(other._cidrs)
|
||||
own_idx = 0
|
||||
other_idx = 0
|
||||
own_len = len(own_nets)
|
||||
other_len = len(other_nets)
|
||||
while own_idx < own_len and other_idx < other_len:
|
||||
own_cur = own_nets[own_idx]
|
||||
other_cur = other_nets[other_idx]
|
||||
|
||||
if own_cur == other_cur:
|
||||
own_idx += 1
|
||||
other_idx += 1
|
||||
elif own_cur in other_cur:
|
||||
own_idx += 1
|
||||
elif other_cur in own_cur:
|
||||
other_idx = _subtract(own_cur, other_nets, other_idx,
|
||||
result_ranges)
|
||||
own_idx += 1
|
||||
else:
|
||||
# own_cur and other_cur have nothing in common
|
||||
if own_cur < other_cur:
|
||||
result_cidrs[own_cur] = True
|
||||
own_idx += 1
|
||||
else:
|
||||
other_idx += 1
|
||||
|
||||
# If the above loop terminated because it processed all cidrs of
|
||||
# "other", then any remaining cidrs in self must be part of the result.
|
||||
while own_idx < own_len:
|
||||
result_cidrs[own_nets[own_idx]] = True
|
||||
own_idx += 1
|
||||
|
||||
for start, stop in _iter_merged_ranges(result_ranges):
|
||||
for cidr in iprange_to_cidrs(start, stop):
|
||||
result_cidrs[cidr] = True
|
||||
|
||||
result = IPSet()
|
||||
result._cidrs = result_cidrs
|
||||
return result
|
||||
|
||||
__sub__ = difference
|
||||
|
||||
def __len__(self):
|
||||
"""
|
||||
:return: the cardinality of this IP set (i.e. sum of individual IP \
|
||||
addresses). Raises ``IndexError`` if size > maxint (a Python \
|
||||
limitation). Use the .size property for subnets of any size.
|
||||
"""
|
||||
size = self.size
|
||||
if size > _sys_maxint:
|
||||
raise IndexError(
|
||||
"range contains more than %d (sys.maxint) IP addresses!"
|
||||
"Use the .size property instead." % _sys_maxint)
|
||||
return size
|
||||
|
||||
@property
|
||||
def size(self):
|
||||
"""
|
||||
The cardinality of this IP set (based on the number of individual IP
|
||||
addresses including those implicitly defined in subnets).
|
||||
"""
|
||||
return sum([cidr.size for cidr in self._cidrs])
|
||||
|
||||
def __repr__(self):
|
||||
""":return: Python statement to create an equivalent object"""
|
||||
return 'IPSet(%r)' % [str(c) for c in sorted(self._cidrs)]
|
||||
|
||||
__str__ = __repr__
|
||||
|
||||
def iscontiguous(self):
|
||||
"""
|
||||
Returns True if the members of the set form a contiguous IP
|
||||
address range (with no gaps), False otherwise.
|
||||
|
||||
:return: ``True`` if the ``IPSet`` object is contiguous.
|
||||
"""
|
||||
cidrs = self.iter_cidrs()
|
||||
if len(cidrs) > 1:
|
||||
previous = cidrs[0][0]
|
||||
for cidr in cidrs:
|
||||
if cidr[0] != previous:
|
||||
return False
|
||||
previous = cidr[-1] + 1
|
||||
return True

    def iprange(self):
        """
        Generates an IPRange for this IPSet, if all its members
        form a single contiguous sequence.

        Raises ``ValueError`` if the set is not contiguous.

        :return: An ``IPRange`` for all IPs in the IPSet.
        """
        if self.iscontiguous():
            cidrs = self.iter_cidrs()
            if not cidrs:
                return None
            return IPRange(cidrs[0][0], cidrs[-1][-1])
        else:
            raise ValueError("IPSet is not contiguous")

    def iter_ipranges(self):
        """Generate the merged IPRanges for this IPSet.

        In contrast to self.iprange(), this will work even when the IPSet is
        not contiguous. Adjacent IPRanges will be merged together, so you
        get the minimal number of IPRanges.
        """
        sorted_ranges = [(cidr._module.version, cidr.first, cidr.last) for
                         cidr in self.iter_cidrs()]

        for start, stop in _iter_merged_ranges(sorted_ranges):
            yield IPRange(start, stop)
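The adjacent-range merging that `iter_ipranges()` relies on can be sketched standalone. This is an illustrative re-implementation of the merging idea only, not the vendored `_iter_merged_ranges` helper (which also carries an IP version per tuple):

```python
def merge_ranges(ranges):
    """Merge sorted (first, last) integer ranges, joining any that
    touch or overlap (i.e. next first <= previous last + 1)."""
    merged = []
    for first, last in sorted(ranges):
        if merged and first <= merged[-1][1] + 1:
            # Extend the previous range instead of starting a new one.
            merged[-1] = (merged[-1][0], max(merged[-1][1], last))
        else:
            merged.append((first, last))
    return merged

# 10-19 and 20-29 are adjacent, so they merge; 40-49 stays separate.
```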
273
vendor/netaddr/strategy/__init__.py
vendored
Normal file
@ -0,0 +1,273 @@
#-----------------------------------------------------------------------------
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
# Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
"""
Shared logic for various address types.
"""
import re as _re

from netaddr.compat import _range, _is_str


def bytes_to_bits():
    """
    :return: A 256 element list containing 8-bit binary digit strings. The
        list index value is equivalent to its bit string value.
    """
    lookup = []
    bits_per_byte = _range(7, -1, -1)
    for num in range(256):
        bits = 8 * [None]
        for i in bits_per_byte:
            bits[i] = '01'[num & 1]
            num >>= 1
        lookup.append(''.join(bits))
    return lookup

#: A lookup table of 8-bit integer values to their binary digit bit strings.
BYTES_TO_BITS = bytes_to_bits()
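For reference, the same 256-entry table can be produced with Python's built-in `format()`; a quick sanity check of what the lookup contains (illustrative, not part of the vendored module):

```python
# Entry N of the table is the zero-padded 8-bit binary string for N.
table = [format(n, '08b') for n in range(256)]

# e.g. table[5] is '00000101' and table[255] is '11111111'.
```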


def valid_words(words, word_size, num_words):
    """
    :param words: A sequence of unsigned integer word values.

    :param word_size: Width (in bits) of each unsigned integer word value.

    :param num_words: Number of unsigned integer words expected.

    :return: ``True`` if word sequence is valid for this address type,
        ``False`` otherwise.
    """
    if not hasattr(words, '__iter__'):
        return False

    if len(words) != num_words:
        return False

    max_word = 2 ** word_size - 1

    for i in words:
        if not 0 <= i <= max_word:
            return False

    return True


def int_to_words(int_val, word_size, num_words):
    """
    :param int_val: Unsigned integer to be divided into words of equal size.

    :param word_size: Width (in bits) of each unsigned integer word value.

    :param num_words: Number of unsigned integer words expected.

    :return: A tuple containing unsigned integer word values split according
        to the provided arguments.
    """
    max_int = 2 ** (num_words * word_size) - 1

    if not 0 <= int_val <= max_int:
        raise IndexError('integer out of bounds: %r!' % hex(int_val))

    max_word = 2 ** word_size - 1

    words = []
    for _ in range(num_words):
        word = int_val & max_word
        words.append(int(word))
        int_val >>= word_size

    return tuple(reversed(words))


def words_to_int(words, word_size, num_words):
    """
    :param words: A sequence of unsigned integer word values.

    :param word_size: Width (in bits) of each unsigned integer word value.

    :param num_words: Number of unsigned integer words expected.

    :return: An unsigned integer that is equivalent to the value represented
        by the word sequence.
    """
    if not valid_words(words, word_size, num_words):
        raise ValueError('invalid integer word sequence: %r!' % (words,))

    int_val = 0
    for i, num in enumerate(reversed(words)):
        word = num
        word = word << word_size * i
        int_val = int_val | word

    return int_val
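A minimal standalone sketch of the word split/join round trip these two functions implement (same shift-and-mask logic, simplified with no validation):

```python
def split_words(int_val, word_size, num_words):
    # Peel words off the low end, then reverse into big-endian order.
    max_word = 2 ** word_size - 1
    words = []
    for _ in range(num_words):
        words.append(int_val & max_word)
        int_val >>= word_size
    return tuple(reversed(words))

def join_words(words, word_size):
    # OR each word back into place, most significant first.
    int_val = 0
    for word in words:
        int_val = (int_val << word_size) | word
    return int_val

# 0xC0A80101 is 192.168.1.1 viewed as a 32-bit integer.
```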


def valid_bits(bits, width, word_sep=''):
    """
    :param bits: A network address in a delimited binary string format.

    :param width: Maximum width (in bits) of a network address (excluding
        delimiters).

    :param word_sep: (optional) character or string used to delimit word
        groups (default: '', no separator).

    :return: ``True`` if network address is valid, ``False`` otherwise.
    """
    if not _is_str(bits):
        return False

    if word_sep != '':
        bits = bits.replace(word_sep, '')

    if len(bits) != width:
        return False

    max_int = 2 ** width - 1

    try:
        if 0 <= int(bits, 2) <= max_int:
            return True
    except ValueError:
        pass

    return False


def bits_to_int(bits, width, word_sep=''):
    """
    :param bits: A network address in a delimited binary string format.

    :param width: Maximum width (in bits) of a network address (excluding
        delimiters).

    :param word_sep: (optional) character or string used to delimit word
        groups (default: '', no separator).

    :return: An unsigned integer that is equivalent to the value represented
        by the network address in readable binary form.
    """
    if not valid_bits(bits, width, word_sep):
        raise ValueError('invalid readable binary string: %r!' % (bits,))

    if word_sep != '':
        bits = bits.replace(word_sep, '')

    return int(bits, 2)


def int_to_bits(int_val, word_size, num_words, word_sep=''):
    """
    :param int_val: An unsigned integer.

    :param word_size: Width (in bits) of each unsigned integer word value.

    :param num_words: Number of unsigned integer words expected.

    :param word_sep: (optional) character or string used to delimit word
        groups (default: '', no separator).

    :return: A network address in a delimited binary string format that is
        equivalent in value to the unsigned integer.
    """
    bit_words = []

    for word in int_to_words(int_val, word_size, num_words):
        bits = []
        while word:
            bits.append(BYTES_TO_BITS[word & 255])
            word >>= 8
        bits.reverse()
        bit_str = ''.join(bits) or '0' * word_size
        bits = ('0' * word_size + bit_str)[-word_size:]
        bit_words.append(bits)

    if word_sep != '':
        # Check custom separator.
        if not _is_str(word_sep):
            raise ValueError('word separator is not a string: %r!' % (word_sep,))

    return word_sep.join(bit_words)
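The binary-string direction can be sketched the same way; an illustrative equivalent of formatting an address as delimited bit words and parsing it back (simplified, no validation, using `format()` instead of the byte lookup table):

```python
def to_bits(int_val, word_size, num_words, sep=''):
    # Format each word as a zero-padded binary string, then join.
    max_word = 2 ** word_size - 1
    words = []
    for shift in range((num_words - 1) * word_size, -1, -word_size):
        words.append(format((int_val >> shift) & max_word,
                            '0%db' % word_size))
    return sep.join(words)

def from_bits(bits, sep=''):
    # Strip the separator and parse the remaining digits base 2.
    if sep:
        bits = bits.replace(sep, '')
    return int(bits, 2)
```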


def valid_bin(bin_val, width):
    """
    :param bin_val: A network address in Python's binary representation format
        ('0bxxx').

    :param width: Maximum width (in bits) of a network address (excluding
        delimiters).

    :return: ``True`` if network address is valid, ``False`` otherwise.
    """
    if not _is_str(bin_val):
        return False

    if not bin_val.startswith('0b'):
        return False

    bin_val = bin_val.replace('0b', '')

    if len(bin_val) > width:
        return False

    max_int = 2 ** width - 1

    try:
        if 0 <= int(bin_val, 2) <= max_int:
            return True
    except ValueError:
        pass

    return False


def int_to_bin(int_val, width):
    """
    :param int_val: An unsigned integer.

    :param width: Maximum allowed width (in bits) of an unsigned integer.

    :return: Equivalent string value in Python's binary representation format
        ('0bxxx').
    """
    bin_tokens = []

    try:
        # Python 2.6.x and upwards.
        bin_val = bin(int_val)
    except NameError:
        # Python 2.4.x and 2.5.x
        i = int_val
        while i > 0:
            word = i & 0xff
            bin_tokens.append(BYTES_TO_BITS[word])
            i >>= 8

        bin_tokens.reverse()
        bin_val = '0b' + _re.sub(r'^[0]+([01]+)$', r'\1', ''.join(bin_tokens))

    if len(bin_val[2:]) > width:
        raise IndexError('binary string out of bounds: %s!' % (bin_val,))

    return bin_val


def bin_to_int(bin_val, width):
    """
    :param bin_val: A string containing an unsigned integer in Python's binary
        representation format ('0bxxx').

    :param width: Maximum allowed width (in bits) of an unsigned integer.

    :return: An unsigned integer that is equivalent to the value represented
        by the Python binary string format.
    """
    if not valid_bin(bin_val, width):
        raise ValueError('not a valid Python binary string: %r!' % (bin_val,))

    return int(bin_val.replace('0b', ''), 2)
296
vendor/netaddr/strategy/eui48.py
vendored
Normal file
@ -0,0 +1,296 @@
#-----------------------------------------------------------------------------
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
# Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
"""
IEEE 48-bit EUI (MAC address) logic.

Supports numerous MAC string formats including Cisco's triple hextet as well
as bare MACs containing no delimiters.
"""
import struct as _struct
import re as _re

# Check whether we need to use fallback code or not.
try:
    from socket import AF_LINK
except ImportError:
    AF_LINK = 48

from netaddr.core import AddrFormatError
from netaddr.compat import _is_str
from netaddr.strategy import (
    valid_words as _valid_words, int_to_words as _int_to_words,
    words_to_int as _words_to_int, valid_bits as _valid_bits,
    bits_to_int as _bits_to_int, int_to_bits as _int_to_bits,
    valid_bin as _valid_bin, int_to_bin as _int_to_bin,
    bin_to_int as _bin_to_int)

#: The width (in bits) of this address type.
width = 48

#: The AF_* constant value of this address type.
family = AF_LINK

#: A friendly string name for this address type.
family_name = 'MAC'

#: The version of this address type.
version = 48

#: The maximum integer value that can be represented by this address type.
max_int = 2 ** width - 1

#-----------------------------------------------------------------------------
# Dialect classes.
#-----------------------------------------------------------------------------
class mac_eui48(object):
    """A standard IEEE EUI-48 dialect class."""
    #: The individual word size (in bits) of this address type.
    word_size = 8

    #: The number of words in this address type.
    num_words = width // word_size

    #: The maximum integer value for an individual word in this address type.
    max_word = 2 ** word_size - 1

    #: The separator character used between each word.
    word_sep = '-'

    #: The format string to be used when converting words to string values.
    word_fmt = '%.2X'

    #: The number base to be used when interpreting word values as integers.
    word_base = 16


class mac_unix(mac_eui48):
    """A UNIX-style MAC address dialect class."""
    word_size = 8
    num_words = width // word_size
    word_sep = ':'
    word_fmt = '%x'
    word_base = 16


class mac_unix_expanded(mac_unix):
    """A UNIX-style MAC address dialect class with leading zeroes."""
    word_fmt = '%.2x'


class mac_cisco(mac_eui48):
    """A Cisco 'triple hextet' MAC address dialect class."""
    word_size = 16
    num_words = width // word_size
    word_sep = '.'
    word_fmt = '%.4x'
    word_base = 16


class mac_bare(mac_eui48):
    """A bare (no delimiters) MAC address dialect class."""
    word_size = 48
    num_words = width // word_size
    word_sep = ''
    word_fmt = '%.12X'
    word_base = 16


class mac_pgsql(mac_eui48):
    """A PostgreSQL style (2 x 24-bit words) MAC address dialect class."""
    word_size = 24
    num_words = width // word_size
    word_sep = ':'
    word_fmt = '%.6x'
    word_base = 16


#: The default dialect to be used when not specified by the user.
DEFAULT_DIALECT = mac_eui48

#-----------------------------------------------------------------------------
#: Regular expressions to match all supported MAC address formats.
RE_MAC_FORMATS = (
    # 2 bytes x 6 (UNIX, Windows, EUI-48)
    '^' + ':'.join(['([0-9A-F]{1,2})'] * 6) + '$',
    '^' + '-'.join(['([0-9A-F]{1,2})'] * 6) + '$',

    # 4 bytes x 3 (Cisco)
    '^' + ':'.join(['([0-9A-F]{1,4})'] * 3) + '$',
    '^' + '-'.join(['([0-9A-F]{1,4})'] * 3) + '$',
    '^' + r'\.'.join(['([0-9A-F]{1,4})'] * 3) + '$',

    # 6 bytes x 2 (PostgreSQL)
    '^' + '-'.join(['([0-9A-F]{5,6})'] * 2) + '$',
    '^' + ':'.join(['([0-9A-F]{5,6})'] * 2) + '$',

    # 12 bytes (bare, no delimiters)
    '^(' + ''.join(['[0-9A-F]'] * 12) + ')$',
    '^(' + ''.join(['[0-9A-F]'] * 11) + ')$',
)
# For efficiency, each string regexp is converted in place to its compiled
# counterpart.
RE_MAC_FORMATS = [_re.compile(_, _re.IGNORECASE) for _ in RE_MAC_FORMATS]


def valid_str(addr):
    """
    :param addr: An IEEE EUI-48 (MAC) address in string form.

    :return: ``True`` if MAC address string is valid, ``False`` otherwise.
    """
    for regexp in RE_MAC_FORMATS:
        try:
            match_result = regexp.findall(addr)
            if len(match_result) != 0:
                return True
        except TypeError:
            pass

    return False


def str_to_int(addr):
    """
    :param addr: An IEEE EUI-48 (MAC) address in string form.

    :return: An unsigned integer that is equivalent to the value represented
        by the EUI-48/MAC string address formatted according to the dialect
        settings.
    """
    words = []
    if _is_str(addr):
        found_match = False
        for regexp in RE_MAC_FORMATS:
            match_result = regexp.findall(addr)
            if len(match_result) != 0:
                found_match = True
                if isinstance(match_result[0], tuple):
                    words = match_result[0]
                else:
                    words = (match_result[0],)
                break
        if not found_match:
            raise AddrFormatError('%r is not a supported MAC format!' % (addr,))
    else:
        raise TypeError('%r is not str() or unicode()!' % (addr,))

    int_val = None

    if len(words) == 6:
        # 2 bytes x 6 (UNIX, Windows, EUI-48)
        int_val = int(''.join(['%.2x' % int(w, 16) for w in words]), 16)
    elif len(words) == 3:
        # 4 bytes x 3 (Cisco)
        int_val = int(''.join(['%.4x' % int(w, 16) for w in words]), 16)
    elif len(words) == 2:
        # 6 bytes x 2 (PostgreSQL)
        int_val = int(''.join(['%.6x' % int(w, 16) for w in words]), 16)
    elif len(words) == 1:
        # 12 bytes (bare, no delimiters)
        int_val = int('%012x' % int(words[0], 16), 16)
    else:
        raise AddrFormatError('unexpected word count in MAC address %r!' % (addr,))

    return int_val


def int_to_str(int_val, dialect=None):
    """
    :param int_val: An unsigned integer.

    :param dialect: (optional) a Python class defining formatting options.

    :return: An IEEE EUI-48 (MAC) address string that is equivalent to the
        unsigned integer, formatted according to the dialect settings.
    """
    if dialect is None:
        dialect = mac_eui48

    words = int_to_words(int_val, dialect)
    tokens = [dialect.word_fmt % i for i in words]
    addr = dialect.word_sep.join(tokens)

    return addr
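The parse/format pair above reduces to hex-word joins and shifts; a minimal standalone sketch of the same round trip (illustrative only, not the vendored API, and only handling ':' and '-' delimited six-word forms):

```python
def mac_to_int(addr):
    # Accept ':' or '-' delimited 6-word MAC strings.
    words = addr.replace('-', ':').split(':')
    return int(''.join('%02x' % int(w, 16) for w in words), 16)

def int_to_mac(int_val, sep='-'):
    # Emit six zero-padded upper-case hex words, most significant first.
    return sep.join('%02X' % ((int_val >> shift) & 0xff)
                    for shift in range(40, -1, -8))
```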


def int_to_packed(int_val):
    """
    :param int_val: the integer to be packed.

    :return: a packed string that is equivalent to the value represented by an
        unsigned integer.
    """
    return _struct.pack(">HI", int_val >> 32, int_val & 0xffffffff)


def packed_to_int(packed_int):
    """
    :param packed_int: a packed string containing an unsigned integer.
        It is assumed that the string is packed in network byte order.

    :return: An unsigned integer equivalent to the value of the network
        address represented by the packed binary string.
    """
    words = list(_struct.unpack('>6B', packed_int))

    int_val = 0
    for i, num in enumerate(reversed(words)):
        word = num
        word = word << 8 * i
        int_val = int_val | word

    return int_val
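The `>HI` packing above splits the 48-bit value into a 16-bit plus a 32-bit field because `struct` has no native 6-byte format code; a quick standalone round-trip check of that layout (illustrative sketch):

```python
import struct

def pack48(int_val):
    # High 16 bits as an unsigned short, low 32 bits as an unsigned int,
    # both big-endian -> 6 bytes in network byte order.
    return struct.pack('>HI', int_val >> 32, int_val & 0xffffffff)

def unpack48(data):
    hi, lo = struct.unpack('>HI', data)
    return (hi << 32) | lo
```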


def valid_words(words, dialect=None):
    if dialect is None:
        dialect = DEFAULT_DIALECT
    return _valid_words(words, dialect.word_size, dialect.num_words)


def int_to_words(int_val, dialect=None):
    if dialect is None:
        dialect = DEFAULT_DIALECT
    return _int_to_words(int_val, dialect.word_size, dialect.num_words)


def words_to_int(words, dialect=None):
    if dialect is None:
        dialect = DEFAULT_DIALECT
    return _words_to_int(words, dialect.word_size, dialect.num_words)


def valid_bits(bits, dialect=None):
    if dialect is None:
        dialect = DEFAULT_DIALECT
    return _valid_bits(bits, width, dialect.word_sep)


def bits_to_int(bits, dialect=None):
    if dialect is None:
        dialect = DEFAULT_DIALECT
    return _bits_to_int(bits, width, dialect.word_sep)


def int_to_bits(int_val, dialect=None):
    if dialect is None:
        dialect = DEFAULT_DIALECT
    return _int_to_bits(
        int_val, dialect.word_size, dialect.num_words, dialect.word_sep)


def valid_bin(bin_val, dialect=None):
    if dialect is None:
        dialect = DEFAULT_DIALECT
    return _valid_bin(bin_val, width)


def int_to_bin(int_val):
    return _int_to_bin(int_val, width)


def bin_to_int(bin_val):
    return _bin_to_int(bin_val, width)
273
vendor/netaddr/strategy/eui64.py
vendored
Normal file
@ -0,0 +1,273 @@
#-----------------------------------------------------------------------------
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
# Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
"""
IEEE 64-bit EUI (Extended Unique Identifier) logic.
"""
import struct as _struct
import re as _re

from netaddr.core import AddrFormatError
from netaddr.strategy import (
    valid_words as _valid_words, int_to_words as _int_to_words,
    words_to_int as _words_to_int, valid_bits as _valid_bits,
    bits_to_int as _bits_to_int, int_to_bits as _int_to_bits,
    valid_bin as _valid_bin, int_to_bin as _int_to_bin,
    bin_to_int as _bin_to_int)


# This is a fake constant that doesn't really exist. Here for completeness.
AF_EUI64 = 64

#: The width (in bits) of this address type.
width = 64

#: The AF_* constant value of this address type.
family = AF_EUI64

#: A friendly string name for this address type.
family_name = 'EUI-64'

#: The version of this address type.
version = 64

#: The maximum integer value that can be represented by this address type.
max_int = 2 ** width - 1

#-----------------------------------------------------------------------------
# Dialect classes.
#-----------------------------------------------------------------------------

class eui64_base(object):
    """A standard IEEE EUI-64 dialect class."""
    #: The individual word size (in bits) of this address type.
    word_size = 8

    #: The number of words in this address type.
    num_words = width // word_size

    #: The maximum integer value for an individual word in this address type.
    max_word = 2 ** word_size - 1

    #: The separator character used between each word.
    word_sep = '-'

    #: The format string to be used when converting words to string values.
    word_fmt = '%.2X'

    #: The number base to be used when interpreting word values as integers.
    word_base = 16


class eui64_unix(eui64_base):
    """A UNIX-style MAC address dialect class."""
    word_size = 8
    num_words = width // word_size
    word_sep = ':'
    word_fmt = '%x'
    word_base = 16


class eui64_unix_expanded(eui64_unix):
    """A UNIX-style MAC address dialect class with leading zeroes."""
    word_fmt = '%.2x'


class eui64_cisco(eui64_base):
    """A Cisco 'triple hextet' MAC address dialect class."""
    word_size = 16
    num_words = width // word_size
    word_sep = '.'
    word_fmt = '%.4x'
    word_base = 16


class eui64_bare(eui64_base):
    """A bare (no delimiters) MAC address dialect class."""
    word_size = 64
    num_words = width // word_size
    word_sep = ''
    word_fmt = '%.16X'
    word_base = 16


#: The default dialect to be used when not specified by the user.
DEFAULT_EUI64_DIALECT = eui64_base

#-----------------------------------------------------------------------------
#: Regular expressions to match all supported EUI-64 address formats.
RE_EUI64_FORMATS = (
    # 2 bytes x 8 (UNIX, Windows, EUI-64)
    '^' + ':'.join(['([0-9A-F]{1,2})'] * 8) + '$',
    '^' + '-'.join(['([0-9A-F]{1,2})'] * 8) + '$',

    # 4 bytes x 4 (Cisco like)
    '^' + ':'.join(['([0-9A-F]{1,4})'] * 4) + '$',
    '^' + '-'.join(['([0-9A-F]{1,4})'] * 4) + '$',
    '^' + r'\.'.join(['([0-9A-F]{1,4})'] * 4) + '$',

    # 16 bytes (bare, no delimiters)
    '^(' + ''.join(['[0-9A-F]'] * 16) + ')$',
)
# For efficiency, each string regexp is converted in place to its compiled
# counterpart.
RE_EUI64_FORMATS = [_re.compile(_, _re.IGNORECASE) for _ in RE_EUI64_FORMATS]


def _get_match_result(address, formats):
    for regexp in formats:
        match = regexp.findall(address)
        if match:
            return match[0]


def valid_str(addr):
    """
    :param addr: An IEEE EUI-64 identifier in string form.

    :return: ``True`` if EUI-64 identifier is valid, ``False`` otherwise.
    """
    try:
        if _get_match_result(addr, RE_EUI64_FORMATS):
            return True
    except TypeError:
        pass

    return False


def str_to_int(addr):
    """
    :param addr: An IEEE EUI-64 identifier in string form.

    :return: An unsigned integer that is equivalent to the value represented
        by the EUI-64 string identifier formatted according to the dialect
        settings.
    """
    words = []

    try:
        words = _get_match_result(addr, RE_EUI64_FORMATS)
        if not words:
            raise TypeError
    except TypeError:
        raise AddrFormatError('invalid IEEE EUI-64 identifier: %r!' % (addr,))

    if isinstance(words, tuple):
        pass
    else:
        words = (words,)

    if len(words) == 8:
        # 2 bytes x 8 (UNIX, Windows, EUI-64)
        int_val = int(''.join(['%.2x' % int(w, 16) for w in words]), 16)
    elif len(words) == 4:
        # 4 bytes x 4 (Cisco like)
        int_val = int(''.join(['%.4x' % int(w, 16) for w in words]), 16)
    elif len(words) == 1:
        # 16 bytes (bare, no delimiters)
        int_val = int('%016x' % int(words[0], 16), 16)
    else:
        raise AddrFormatError(
            'bad word count for EUI-64 identifier: %r!' % addr)

    return int_val


def int_to_str(int_val, dialect=None):
    """
    :param int_val: An unsigned integer.

    :param dialect: (optional) a Python class defining formatting options.

    :return: An IEEE EUI-64 identifier that is equivalent to the unsigned
        integer.
    """
    if dialect is None:
        dialect = eui64_base
    words = int_to_words(int_val, dialect)
    tokens = [dialect.word_fmt % i for i in words]
    addr = dialect.word_sep.join(tokens)
    return addr


def int_to_packed(int_val):
    """
    :param int_val: the integer to be packed.

    :return: a packed string that is equivalent to the value represented by an
        unsigned integer.
    """
    words = int_to_words(int_val)
    return _struct.pack('>8B', *words)


def packed_to_int(packed_int):
    """
    :param packed_int: a packed string containing an unsigned integer.
        It is assumed that the string is packed in network byte order.

    :return: An unsigned integer equivalent to the value of the network
        address represented by the packed binary string.
    """
    words = list(_struct.unpack('>8B', packed_int))

    int_val = 0
    for i, num in enumerate(reversed(words)):
        word = num
        word = word << 8 * i
        int_val = int_val | word

    return int_val


def valid_words(words, dialect=None):
    if dialect is None:
        dialect = DEFAULT_EUI64_DIALECT
    return _valid_words(words, dialect.word_size, dialect.num_words)


def int_to_words(int_val, dialect=None):
    if dialect is None:
        dialect = DEFAULT_EUI64_DIALECT
    return _int_to_words(int_val, dialect.word_size, dialect.num_words)


def words_to_int(words, dialect=None):
    if dialect is None:
        dialect = DEFAULT_EUI64_DIALECT
    return _words_to_int(words, dialect.word_size, dialect.num_words)


def valid_bits(bits, dialect=None):
    if dialect is None:
        dialect = DEFAULT_EUI64_DIALECT
    return _valid_bits(bits, width, dialect.word_sep)


def bits_to_int(bits, dialect=None):
    if dialect is None:
        dialect = DEFAULT_EUI64_DIALECT
    return _bits_to_int(bits, width, dialect.word_sep)


def int_to_bits(int_val, dialect=None):
    if dialect is None:
        dialect = DEFAULT_EUI64_DIALECT
    return _int_to_bits(
        int_val, dialect.word_size, dialect.num_words, dialect.word_sep)


def valid_bin(bin_val, dialect=None):
    if dialect is None:
        dialect = DEFAULT_EUI64_DIALECT
    return _valid_bin(bin_val, width)


def int_to_bin(int_val):
    return _int_to_bin(int_val, width)


def bin_to_int(bin_val):
    return _bin_to_int(bin_val, width)
279
vendor/netaddr/strategy/ipv4.py
vendored
Normal file
@ -0,0 +1,279 @@
#-----------------------------------------------------------------------------
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
# Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
"""IPv4 address logic."""

import sys as _sys
import struct as _struct

from socket import inet_aton as _inet_aton
# Check whether we need to use fallback code or not.
if _sys.platform in ('win32', 'cygwin'):
    # inet_pton() not available on Windows. inet_pton() under cygwin
    # behaves exactly like inet_aton() and is therefore highly unreliable.
    from netaddr.fbsocket import inet_pton as _inet_pton, AF_INET
else:
    # All other cases, use all functions from the socket module.
    from socket import inet_pton as _inet_pton, AF_INET

from netaddr.core import AddrFormatError, ZEROFILL, INET_PTON

from netaddr.strategy import (
    valid_words as _valid_words, valid_bits as _valid_bits,
    bits_to_int as _bits_to_int, int_to_bits as _int_to_bits,
    valid_bin as _valid_bin, int_to_bin as _int_to_bin,
    bin_to_int as _bin_to_int)

from netaddr.compat import _str_type

#: The width (in bits) of this address type.
width = 32

#: The individual word size (in bits) of this address type.
word_size = 8

#: The format string to be used when converting words to string values.
word_fmt = '%d'

#: The separator character used between each word.
word_sep = '.'

#: The AF_* constant value of this address type.
family = AF_INET

#: A friendly string name for this address type.
family_name = 'IPv4'

#: The version of this address type.
version = 4

#: The number base to be used when interpreting word values as integers.
word_base = 10

#: The maximum integer value that can be represented by this address type.
max_int = 2 ** width - 1

#: The number of words in this address type.
num_words = width // word_size

#: The maximum integer value for an individual word in this address type.
max_word = 2 ** word_size - 1

#: A dictionary mapping IPv4 CIDR prefixes to the equivalent netmasks.
prefix_to_netmask = dict(
    [(i, max_int ^ (2 ** (width - i) - 1)) for i in range(0, width + 1)])

#: A dictionary mapping IPv4 netmasks to their equivalent CIDR prefixes.
netmask_to_prefix = dict(
    [(max_int ^ (2 ** (width - i) - 1), i) for i in range(0, width + 1)])

#: A dictionary mapping IPv4 CIDR prefixes to the equivalent hostmasks.
prefix_to_hostmask = dict(
    [(i, (2 ** (width - i) - 1)) for i in range(0, width + 1)])

#: A dictionary mapping IPv4 hostmasks to their equivalent CIDR prefixes.
hostmask_to_prefix = dict(
    [((2 ** (width - i) - 1), i) for i in range(0, width + 1)])
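The prefix/netmask tables are pure bit arithmetic; an illustrative standalone check of the mapping, using the same expressions as the dictionaries above:

```python
width = 32
max_int = 2 ** width - 1

def prefix_to_mask(prefix):
    # A /prefix netmask keeps the top `prefix` bits set; the hostmask
    # is its bitwise complement within 32 bits.
    return max_int ^ (2 ** (width - prefix) - 1)

# e.g. /24 -> 255.255.255.0 == 0xFFFFFF00
```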


def valid_str(addr, flags=0):
    """
    :param addr: An IPv4 address in presentation (string) format.

    :param flags: decides which rules are applied to the interpretation of the
        addr value. Supported constants are INET_PTON and ZEROFILL. See the
        netaddr.core docs for details.

    :return: ``True`` if the IPv4 address is valid, ``False`` otherwise.
    """
    if addr == '':
        raise AddrFormatError('Empty strings are not supported!')

    validity = True

    if flags & ZEROFILL:
        addr = '.'.join(['%d' % int(i) for i in addr.split('.')])

    try:
        if flags & INET_PTON:
            _inet_pton(AF_INET, addr)
        else:
            _inet_aton(addr)
    except Exception:
        validity = False

    return validity


def str_to_int(addr, flags=0):
    """
    :param addr: An IPv4 dotted decimal address in string form.

    :param flags: decides which rules are applied to the interpretation of the
        addr value. Supported constants are INET_PTON and ZEROFILL. See the
        netaddr.core docs for details.

    :return: The equivalent unsigned integer for a given IPv4 address.
    """
    if flags & ZEROFILL:
        addr = '.'.join(['%d' % int(i) for i in addr.split('.')])

    try:
        if flags & INET_PTON:
            return _struct.unpack('>I', _inet_pton(AF_INET, addr))[0]
        else:
            return _struct.unpack('>I', _inet_aton(addr))[0]
    except Exception:
        raise AddrFormatError('%r is not a valid IPv4 address string!' % (addr,))
|
||||
|
||||
|
||||
def int_to_str(int_val, dialect=None):
|
||||
"""
|
||||
:param int_val: An unsigned integer.
|
||||
|
||||
:param dialect: (unused) Any value passed in is ignored.
|
||||
|
||||
:return: The IPv4 presentation (string) format address equivalent to the
|
||||
unsigned integer provided.
|
||||
"""
|
||||
if 0 <= int_val <= max_int:
|
||||
return '%d.%d.%d.%d' % (
|
||||
int_val >> 24,
|
||||
(int_val >> 16) & 0xff,
|
||||
(int_val >> 8) & 0xff,
|
||||
int_val & 0xff)
|
||||
else:
|
||||
raise ValueError('%r is not a valid 32-bit unsigned integer!' % (int_val,))
|
||||
|
||||
|
||||
def int_to_arpa(int_val):
|
||||
"""
|
||||
:param int_val: An unsigned integer.
|
||||
|
||||
:return: The reverse DNS lookup for an IPv4 address in network byte
|
||||
order integer form.
|
||||
"""
|
||||
words = ["%d" % i for i in int_to_words(int_val)]
|
||||
words.reverse()
|
||||
words.extend(['in-addr', 'arpa', ''])
|
||||
return '.'.join(words)
|
||||
|
||||
|
||||
def int_to_packed(int_val):
|
||||
"""
|
||||
:param int_val: the integer to be packed.
|
||||
|
||||
:return: a packed string that is equivalent to value represented by an
|
||||
unsigned integer.
|
||||
"""
|
||||
return _struct.pack('>I', int_val)
|
||||
|
||||
|
||||
def packed_to_int(packed_int):
|
||||
"""
|
||||
:param packed_int: a packed string containing an unsigned integer.
|
||||
It is assumed that string is packed in network byte order.
|
||||
|
||||
:return: An unsigned integer equivalent to value of network address
|
||||
represented by packed binary string.
|
||||
"""
|
||||
return _struct.unpack('>I', packed_int)[0]
|
||||
|
||||
|
||||
def valid_words(words):
|
||||
return _valid_words(words, word_size, num_words)
|
||||
|
||||
|
||||
def int_to_words(int_val):
|
||||
"""
|
||||
:param int_val: An unsigned integer.
|
||||
|
||||
:return: An integer word (octet) sequence that is equivalent to value
|
||||
represented by an unsigned integer.
|
||||
"""
|
||||
if not 0 <= int_val <= max_int:
|
||||
raise ValueError('%r is not a valid integer value supported by'
|
||||
'this address type!' % (int_val,))
|
||||
return ( int_val >> 24,
|
||||
(int_val >> 16) & 0xff,
|
||||
(int_val >> 8) & 0xff,
|
||||
int_val & 0xff)
|
||||
|
||||
|
||||
def words_to_int(words):
|
||||
"""
|
||||
:param words: A list or tuple containing integer octets.
|
||||
|
||||
:return: An unsigned integer that is equivalent to value represented
|
||||
by word (octet) sequence.
|
||||
"""
|
||||
if not valid_words(words):
|
||||
raise ValueError('%r is not a valid octet list for an IPv4 address!' % (words,))
|
||||
return _struct.unpack('>I', _struct.pack('4B', *words))[0]
|
||||
|
||||
|
||||
def valid_bits(bits):
|
||||
return _valid_bits(bits, width, word_sep)
|
||||
|
||||
|
||||
def bits_to_int(bits):
|
||||
return _bits_to_int(bits, width, word_sep)
|
||||
|
||||
|
||||
def int_to_bits(int_val, word_sep=None):
|
||||
if word_sep is None:
|
||||
word_sep = globals()['word_sep']
|
||||
return _int_to_bits(int_val, word_size, num_words, word_sep)
|
||||
|
||||
|
||||
def valid_bin(bin_val):
|
||||
return _valid_bin(bin_val, width)
|
||||
|
||||
|
||||
def int_to_bin(int_val):
|
||||
return _int_to_bin(int_val, width)
|
||||
|
||||
|
||||
def bin_to_int(bin_val):
|
||||
return _bin_to_int(bin_val, width)
|
||||
|
||||
|
||||
def expand_partial_address(addr):
|
||||
"""
|
||||
Expands a partial IPv4 address into a full 4-octet version.
|
||||
|
||||
:param addr: an partial or abbreviated IPv4 address
|
||||
|
||||
:return: an expanded IP address in presentation format (x.x.x.x)
|
||||
|
||||
"""
|
||||
tokens = []
|
||||
|
||||
error = AddrFormatError('invalid partial IPv4 address: %r!' % addr)
|
||||
|
||||
if isinstance(addr, _str_type):
|
||||
if ':' in addr:
|
||||
# Ignore IPv6 ...
|
||||
raise error
|
||||
|
||||
try:
|
||||
if '.' in addr:
|
||||
tokens = ['%d' % int(o) for o in addr.split('.')]
|
||||
else:
|
||||
tokens = ['%d' % int(addr)]
|
||||
except ValueError:
|
||||
raise error
|
||||
|
||||
if 1 <= len(tokens) <= 4:
|
||||
for i in range(4 - len(tokens)):
|
||||
tokens.append('0')
|
||||
else:
|
||||
raise error
|
||||
|
||||
if not tokens:
|
||||
raise error
|
||||
|
||||
return '%s.%s.%s.%s' % tuple(tokens)
|
||||
|
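The prefix/netmask table construction and the pack/unpack round trips used above can be sanity-checked with a short stdlib-only snippet (an illustrative sketch, separate from the vendored module):

```python
import socket
import struct

width = 32
max_int = 2 ** width - 1

# Same construction as the module's prefix_to_netmask table.
prefix_to_netmask = {i: max_int ^ (2 ** (width - i) - 1) for i in range(width + 1)}

# A /24 prefix corresponds to the familiar 255.255.255.0 netmask.
assert prefix_to_netmask[24] == 0xFFFFFF00

# Round-trip an address through the same pack/unpack calls that
# str_to_int() and int_to_str() rely on.
int_val = struct.unpack('>I', socket.inet_aton('192.0.2.1'))[0]
assert int_val == 3221225985
assert socket.inet_ntoa(struct.pack('>I', int_val)) == '192.0.2.1'
```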
259
vendor/netaddr/strategy/ipv6.py
vendored
Normal file
@@ -0,0 +1,259 @@
#-----------------------------------------------------------------------------
# Copyright (c) 2008 by David P. D. Moss. All rights reserved.
#
# Released under the BSD license. See the LICENSE file for details.
#-----------------------------------------------------------------------------
"""
IPv6 address logic.
"""
import struct as _struct

OPT_IMPORTS = False

# Check whether we need to use fallback code or not.
try:
    import socket as _socket
    # These might all generate exceptions on different platforms.
    if not _socket.has_ipv6:
        raise Exception('IPv6 disabled')
    _socket.inet_pton
    _socket.AF_INET6
    from _socket import (inet_pton as _inet_pton, inet_ntop as _inet_ntop,
                         AF_INET6)
    OPT_IMPORTS = True
except Exception:
    from netaddr.fbsocket import (inet_pton as _inet_pton, inet_ntop as _inet_ntop,
                                  AF_INET6)

from netaddr.core import AddrFormatError
from netaddr.strategy import (
    valid_words as _valid_words, int_to_words as _int_to_words,
    words_to_int as _words_to_int, valid_bits as _valid_bits,
    bits_to_int as _bits_to_int, int_to_bits as _int_to_bits,
    valid_bin as _valid_bin, int_to_bin as _int_to_bin,
    bin_to_int as _bin_to_int)

#: The width (in bits) of this address type.
width = 128

#: The individual word size (in bits) of this address type.
word_size = 16

#: The separator character used between each word.
word_sep = ':'

#: The AF_* constant value of this address type.
family = AF_INET6

#: A friendly string name for this address type.
family_name = 'IPv6'

#: The version of this address type.
version = 6

#: The number base to be used when interpreting word values as integers.
word_base = 16

#: The maximum integer value that can be represented by this address type.
max_int = 2 ** width - 1

#: The number of words in this address type.
num_words = width // word_size

#: The maximum integer value for an individual word in this address type.
max_word = 2 ** word_size - 1

#: A dictionary mapping IPv6 CIDR prefixes to the equivalent netmasks.
prefix_to_netmask = dict(
    [(i, max_int ^ (2 ** (width - i) - 1)) for i in range(0, width + 1)])

#: A dictionary mapping IPv6 netmasks to their equivalent CIDR prefixes.
netmask_to_prefix = dict(
    [(max_int ^ (2 ** (width - i) - 1), i) for i in range(0, width + 1)])

#: A dictionary mapping IPv6 CIDR prefixes to the equivalent hostmasks.
prefix_to_hostmask = dict(
    [(i, (2 ** (width - i) - 1)) for i in range(0, width + 1)])

#: A dictionary mapping IPv6 hostmasks to their equivalent CIDR prefixes.
hostmask_to_prefix = dict(
    [((2 ** (width - i) - 1), i) for i in range(0, width + 1)])

#-----------------------------------------------------------------------------
# Dialect classes.
#-----------------------------------------------------------------------------

class ipv6_compact(object):
    """An IPv6 dialect class - compact form."""
    #: The format string used when converting words into string values.
    word_fmt = '%x'

    #: Boolean flag indicating if IPv6 compaction algorithm should be used.
    compact = True

class ipv6_full(ipv6_compact):
    """An IPv6 dialect class - 'all zeroes' form."""

    #: Boolean flag indicating if IPv6 compaction algorithm should be used.
    compact = False

class ipv6_verbose(ipv6_compact):
    """An IPv6 dialect class - extra wide 'all zeroes' form."""

    #: The format string used when converting words into string values.
    word_fmt = '%.4x'

    #: Boolean flag indicating if IPv6 compaction algorithm should be used.
    compact = False


def valid_str(addr, flags=0):
    """
    :param addr: An IPv6 address in presentation (string) format.

    :param flags: decides which rules are applied to the interpretation of the
        addr value. Future use - currently has no effect.

    :return: ``True`` if IPv6 address is valid, ``False`` otherwise.
    """
    if addr == '':
        raise AddrFormatError('Empty strings are not supported!')

    try:
        _inet_pton(AF_INET6, addr)
    except Exception:
        return False
    return True


def str_to_int(addr, flags=0):
    """
    :param addr: An IPv6 address in string form.

    :param flags: decides which rules are applied to the interpretation of the
        addr value. Future use - currently has no effect.

    :return: The equivalent unsigned integer for a given IPv6 address.
    """
    try:
        packed_int = _inet_pton(AF_INET6, addr)
        return packed_to_int(packed_int)
    except Exception:
        raise AddrFormatError('%r is not a valid IPv6 address string!' % (addr,))


def int_to_str(int_val, dialect=None):
    """
    :param int_val: An unsigned integer.

    :param dialect: (optional) a Python class defining formatting options.

    :return: The IPv6 presentation (string) format address equivalent to the
        unsigned integer provided.
    """
    if dialect is None:
        dialect = ipv6_compact

    addr = None

    try:
        packed_int = int_to_packed(int_val)
        if dialect.compact:
            # Default return value.
            addr = _inet_ntop(AF_INET6, packed_int)
        else:
            # Custom return value.
            words = list(_struct.unpack('>8H', packed_int))
            tokens = [dialect.word_fmt % word for word in words]
            addr = word_sep.join(tokens)
    except Exception:
        raise ValueError('%r is not a valid 128-bit unsigned integer!' % (int_val,))

    return addr


def int_to_arpa(int_val):
    """
    :param int_val: An unsigned integer.

    :return: The reverse DNS lookup for an IPv6 address in network byte
        order integer form.
    """
    addr = int_to_str(int_val, ipv6_verbose)
    tokens = list(addr.replace(':', ''))
    tokens.reverse()
    # We won't support ip6.int here - see RFC 3152 for details.
    tokens = tokens + ['ip6', 'arpa', '']
    return '.'.join(tokens)


def int_to_packed(int_val):
    """
    :param int_val: the integer to be packed.

    :return: a packed string that is equivalent to value represented by an
        unsigned integer.
    """
    words = int_to_words(int_val, 4, 32)
    return _struct.pack('>4I', *words)


def packed_to_int(packed_int):
    """
    :param packed_int: a packed string containing an unsigned integer.
        It is assumed that string is packed in network byte order.

    :return: An unsigned integer equivalent to value of network address
        represented by packed binary string.
    """
    words = list(_struct.unpack('>4I', packed_int))

    int_val = 0
    for i, num in enumerate(reversed(words)):
        word = num
        word = word << 32 * i
        int_val = int_val | word

    return int_val


def valid_words(words):
    return _valid_words(words, word_size, num_words)


def int_to_words(int_val, num_words=None, word_size=None):
    if num_words is None:
        num_words = globals()['num_words']
    if word_size is None:
        word_size = globals()['word_size']
    return _int_to_words(int_val, word_size, num_words)


def words_to_int(words):
    return _words_to_int(words, word_size, num_words)


def valid_bits(bits):
    return _valid_bits(bits, width, word_sep)


def bits_to_int(bits):
    return _bits_to_int(bits, width, word_sep)


def int_to_bits(int_val, word_sep=None):
    if word_sep is None:
        word_sep = globals()['word_sep']
    return _int_to_bits(int_val, word_size, num_words, word_sep)


def valid_bin(bin_val):
    return _valid_bin(bin_val, width)


def int_to_bin(int_val):
    return _int_to_bin(int_val, width)


def bin_to_int(bin_val):
    return _bin_to_int(bin_val, width)
1
vendor/pathlib2-2.3.7.post1.dist-info/INSTALLER
vendored
Normal file
@@ -0,0 +1 @@
pip
23
vendor/pathlib2-2.3.7.post1.dist-info/LICENSE.rst
vendored
Normal file
@@ -0,0 +1,23 @@
The MIT License (MIT)

Copyright (c) 2014-2017 Matthias C. M. Troffaes
Copyright (c) 2012-2014 Antoine Pitrou and contributors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
88
vendor/pathlib2-2.3.7.post1.dist-info/METADATA
vendored
Normal file
@@ -0,0 +1,88 @@
Metadata-Version: 2.1
Name: pathlib2
Version: 2.3.7.post1
Summary: Object-oriented filesystem paths
Home-page: https://github.com/jazzband/pathlib2
Author: Matthias C. M. Troffaes
Author-email: matthias.troffaes@gmail.com
License: MIT
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: System :: Filesystems
Requires-Dist: six
Requires-Dist: scandir ; python_version<"3.5"
Requires-Dist: typing ; python_version<"3.5"

The `old pathlib <https://web.archive.org/web/20181106215056/https://bitbucket.org/pitrou/pathlib/>`_
module on bitbucket is no longer maintained.
The goal of pathlib2 is to provide a backport of the
`standard pathlib <http://docs.python.org/dev/library/pathlib.html>`_
module which tracks the standard library module,
so all the newest features of the standard pathlib can also be
used on older Python versions.

Download
--------

Standalone releases are available on PyPI:
http://pypi.python.org/pypi/pathlib2/

Development
-----------

The main development takes place in the Python standard library: see
the `Python developer's guide <http://docs.python.org/devguide/>`_.
In particular, new features should be submitted to the
`Python bug tracker <http://bugs.python.org/>`_.

Issues that occur in this backport, but that do not occur in the
standard Python pathlib module, can be submitted on
the `pathlib2 bug tracker <https://github.com/jazzband/pathlib2/issues>`_.

Documentation
-------------

Refer to the
`standard pathlib <http://docs.python.org/dev/library/pathlib.html>`_
documentation.

Known Issues
------------

For historic reasons, pathlib2 still uses bytes to represent file paths internally.
Unfortunately, on Windows with Python 2.7, the file system encoder (``mbcs``)
has only poor support for non-ascii characters,
and can silently replace non-ascii characters without warning.
For example, ``u'тест'.encode(sys.getfilesystemencoding())`` results in ``????``,
which is obviously completely useless.

Therefore, on Windows with Python 2.7, until this problem is fixed upstream,
you unfortunately cannot rely on pathlib2 to support the full unicode range for filenames.
See `issue #56 <https://github.com/jazzband/pathlib2/issues/56>`_ for more details.

.. |github| image:: https://github.com/jazzband/pathlib2/actions/workflows/python-package.yml/badge.svg
   :target: https://github.com/jazzband/pathlib2/actions/workflows/python-package.yml
   :alt: github

.. |codecov| image:: https://codecov.io/gh/jazzband/pathlib2/branch/develop/graph/badge.svg
   :target: https://codecov.io/gh/jazzband/pathlib2
   :alt: codecov

.. |jazzband| image:: https://jazzband.co/static/img/badge.svg
   :alt: Jazzband
   :target: https://jazzband.co/
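The import-fallback pattern this backport is designed around can be sketched as follows (an illustrative snippet, not taken from the packaged metadata; the alias name is an assumption):

```python
try:
    import pathlib  # standard library, Python 3.4+
except ImportError:
    import pathlib2 as pathlib  # fall back to the backport on older Pythons

# Either way, the same object-oriented path API is available.
p = pathlib.PurePosixPath('/tmp') / 'example.txt'
assert str(p) == '/tmp/example.txt'
assert p.suffix == '.txt'
```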
8
vendor/pathlib2-2.3.7.post1.dist-info/RECORD
vendored
Normal file
@@ -0,0 +1,8 @@
pathlib2-2.3.7.post1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
pathlib2-2.3.7.post1.dist-info/LICENSE.rst,sha256=hh-BMAShUax3AkrURXlGU4Cd34p1cq7nurGNEd8rocY,1175
pathlib2-2.3.7.post1.dist-info/METADATA,sha256=xIYAYCXA8QLbMGx9qUBSrxFpCtjGShQ8kIGKt_N-_1Y,3454
pathlib2-2.3.7.post1.dist-info/RECORD,,
pathlib2-2.3.7.post1.dist-info/WHEEL,sha256=Z-nyYpwrcSqxfdux5Mbn_DQ525iP7J2DG3JgGvOYyTQ,110
pathlib2-2.3.7.post1.dist-info/top_level.txt,sha256=tNPkisFiGBFsPUnCIHg62vSFlkx_1NO86Id8lbJmfFQ,9
pathlib2/__init__.py,sha256=5Nd6Qc5fI42nX0npJVI-5Nwn1lKlmqBxyHQoj_wpCQM,63938
pathlib2/__init__.pyc,,
6
vendor/pathlib2-2.3.7.post1.dist-info/WHEEL
vendored
Normal file
@@ -0,0 +1,6 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.36.2)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any
1
vendor/pathlib2-2.3.7.post1.dist-info/top_level.txt
vendored
Normal file
@@ -0,0 +1 @@
pathlib2
1896
vendor/pathlib2/__init__.py
vendored
Normal file
File diff suppressed because it is too large
1
vendor/scandir-1.10.0.dist-info/INSTALLER
vendored
Normal file
@@ -0,0 +1 @@
pip
27
vendor/scandir-1.10.0.dist-info/LICENSE.txt
vendored
Normal file
@@ -0,0 +1,27 @@
Copyright (c) 2012, Ben Hoyt
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.

* Neither the name of Ben Hoyt nor the names of its contributors may be used
  to endorse or promote products derived from this software without specific
  prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
237
vendor/scandir-1.10.0.dist-info/METADATA
vendored
Normal file
@@ -0,0 +1,237 @@
|
||||
Metadata-Version: 2.1
|
||||
Name: scandir
|
||||
Version: 1.10.0
|
||||
Summary: scandir, a better directory iterator and faster os.walk()
|
||||
Home-page: https://github.com/benhoyt/scandir
|
||||
Author: Ben Hoyt
|
||||
Author-email: benhoyt@gmail.com
|
||||
License: New BSD License
|
||||
Platform: UNKNOWN
|
||||
Classifier: Development Status :: 5 - Production/Stable
|
||||
Classifier: Intended Audience :: Developers
|
||||
Classifier: Operating System :: OS Independent
|
||||
Classifier: License :: OSI Approved :: BSD License
|
||||
Classifier: Programming Language :: Python
|
||||
Classifier: Topic :: System :: Filesystems
|
||||
Classifier: Topic :: System :: Operating System
|
||||
Classifier: Programming Language :: Python
|
||||
Classifier: Programming Language :: Python :: 2
|
||||
Classifier: Programming Language :: Python :: 2.7
|
||||
Classifier: Programming Language :: Python :: 3
|
||||
Classifier: Programming Language :: Python :: 3.4
|
||||
Classifier: Programming Language :: Python :: 3.5
|
||||
Classifier: Programming Language :: Python :: 3.6
|
||||
Classifier: Programming Language :: Python :: 3.7
|
||||
Classifier: Programming Language :: Python :: Implementation :: CPython
|
||||
|
||||
scandir, a better directory iterator and faster os.walk()
|
||||
=========================================================
|
||||
|
||||
.. image:: https://img.shields.io/pypi/v/scandir.svg
|
||||
:target: https://pypi.python.org/pypi/scandir
|
||||
:alt: scandir on PyPI (Python Package Index)
|
||||
|
||||
.. image:: https://travis-ci.org/benhoyt/scandir.svg?branch=master
|
||||
:target: https://travis-ci.org/benhoyt/scandir
|
||||
:alt: Travis CI tests (Linux)
|
||||
|
||||
.. image:: https://ci.appveyor.com/api/projects/status/github/benhoyt/scandir?branch=master&svg=true
|
||||
:target: https://ci.appveyor.com/project/benhoyt/scandir
|
||||
:alt: Appveyor tests (Windows)
|
||||
|
||||
|
||||
``scandir()`` is a directory iteration function like ``os.listdir()``,
|
||||
except that instead of returning a list of bare filenames, it yields
|
||||
``DirEntry`` objects that include file type and stat information along
|
||||
with the name. Using ``scandir()`` increases the speed of ``os.walk()``
|
||||
by 2-20 times (depending on the platform and file system) by avoiding
|
||||
unnecessary calls to ``os.stat()`` in most cases.
|
||||
|
||||
|
||||
Now included in a Python near you!
|
||||
----------------------------------
|
||||
|
||||
``scandir`` has been included in the Python 3.5 standard library as
|
||||
``os.scandir()``, and the related performance improvements to
|
||||
``os.walk()`` have also been included. So if you're lucky enough to be
|
||||
using Python 3.5 (release date September 13, 2015) you get the benefit
|
||||
immediately, otherwise just
|
||||
`download this module from PyPI <https://pypi.python.org/pypi/scandir>`_,
|
||||
install it with ``pip install scandir``, and then do something like
|
||||
this in your code:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
# Use the built-in version of scandir/walk if possible, otherwise
|
||||
# use the scandir module version
|
||||
try:
|
||||
from os import scandir, walk
|
||||
except ImportError:
|
||||
from scandir import scandir, walk
|
||||
|
||||
`PEP 471 <https://www.python.org/dev/peps/pep-0471/>`_, which is the
|
||||
PEP that proposes including ``scandir`` in the Python standard library,
|
||||
was `accepted <https://mail.python.org/pipermail/python-dev/2014-July/135561.html>`_
|
||||
in July 2014 by Victor Stinner, the BDFL-delegate for the PEP.
|
||||
|
||||
This ``scandir`` module is intended to work on Python 2.7+ and Python
|
||||
3.4+ (and it has been tested on those versions).
|
||||
|
||||
|
||||
Background
|
||||
----------
|
||||
|
||||
Python's built-in ``os.walk()`` is significantly slower than it needs to be,
|
||||
because -- in addition to calling ``listdir()`` on each directory -- it calls
|
||||
``stat()`` on each file to determine whether the filename is a directory or not.
|
||||
But both ``FindFirstFile`` / ``FindNextFile`` on Windows and ``readdir`` on Linux/OS
|
||||
X already tell you whether the files returned are directories or not, so
|
||||
no further ``stat`` system calls are needed. In short, you can reduce the number
|
||||
of system calls from about 2N to N, where N is the total number of files and
|
||||
directories in the tree.
|
||||
|
||||
In practice, removing all those extra system calls makes ``os.walk()`` about
|
||||
**7-50 times as fast on Windows, and about 3-10 times as fast on Linux and Mac OS
|
||||
X.** So we're not talking about micro-optimizations. See more benchmarks
|
||||
in the "Benchmarks" section below.
|
||||
|
||||
Somewhat relatedly, many people have also asked for a version of
|
||||
``os.listdir()`` that yields filenames as it iterates instead of returning them
|
||||
as one big list. This improves memory efficiency for iterating very large
|
||||
directories.
|
||||
|
||||
So as well as a faster ``walk()``, scandir adds a new ``scandir()`` function.
|
||||
They're pretty easy to use, but see "The API" below for the full docs.
|
||||
|
||||
|
||||
Benchmarks
|
||||
----------
|
||||
|
||||
Below are results showing how many times as fast ``scandir.walk()`` is than
|
||||
``os.walk()`` on various systems, found by running ``benchmark.py`` with no
|
||||
arguments:
|
||||
|
||||
==================== ============== =============
|
||||
System version Python version Times as fast
|
||||
==================== ============== =============
|
||||
Windows 7 64-bit 2.7.7 64-bit 10.4
|
||||
Windows 7 64-bit SSD 2.7.7 64-bit 10.3
|
||||
Windows 7 64-bit NFS 2.7.6 64-bit 36.8
|
||||
Windows 7 64-bit SSD 3.4.1 64-bit 9.9
|
||||
Windows 7 64-bit SSD 3.5.0 64-bit 9.5
|
||||
Ubuntu 14.04 64-bit 2.7.6 64-bit 5.8
|
||||
Mac OS X 10.9.3 2.7.5 64-bit 3.8
|
||||
==================== ============== =============
|
||||
|
||||
All of the above tests were done using the fast C version of scandir
|
||||
(source code in ``_scandir.c``).
|
||||
|
||||
Note that the gains are less than the above on smaller directories and greater
|
||||
on larger directories. This is why ``benchmark.py`` creates a test directory
|
||||
tree with a standardized size.
|
||||
|
||||
|
||||
The API
|
||||
-------
|
||||
|
||||
walk()
|
||||
~~~~~~
|
||||
|
||||
The API for ``scandir.walk()`` is exactly the same as ``os.walk()``, so just
|
||||
`read the Python docs <https://docs.python.org/3.5/library/os.html#os.walk>`_.
|
||||
|
||||
scandir()
|
||||
~~~~~~~~~
|
||||
|
||||
The full docs for ``scandir()`` and the ``DirEntry`` objects it yields are
|
||||
available in the `Python documentation here <https://docs.python.org/3.5/library/os.html#os.scandir>`_.
|
||||
But below is a brief summary as well.
|
||||
|
||||
scandir(path='.') -> iterator of DirEntry objects for given path
|
||||
|
||||
Like ``listdir``, ``scandir`` calls the operating system's directory
|
||||
iteration system calls to get the names of the files in the given
|
||||
``path``, but it's different from ``listdir`` in two ways:
|
||||
|
||||
* Instead of returning bare filename strings, it returns lightweight
|
||||
``DirEntry`` objects that hold the filename string and provide
|
||||
simple methods that allow access to the additional data the
|
||||
operating system may have returned.
|
||||
|
||||
* It returns a generator instead of a list, so that ``scandir`` acts
|
||||
as a true iterator instead of returning the full list immediately.
|
||||
``scandir()`` yields a ``DirEntry`` object for each file and
sub-directory in ``path``. Just like ``listdir``, the ``'.'``
and ``'..'`` pseudo-directories are skipped, and the entries are
yielded in system-dependent order. Each ``DirEntry`` object has the
following attributes and methods:

* ``name``: the entry's filename, relative to the scandir ``path``
  argument (corresponds to the return values of ``os.listdir``)

* ``path``: the entry's full path name (not necessarily an absolute
  path) -- the equivalent of ``os.path.join(scandir_path, entry.name)``

* ``is_dir(*, follow_symlinks=True)``: similar to
  ``pathlib.Path.is_dir()``, but the return value is cached on the
  ``DirEntry`` object; doesn't require a system call in most cases;
  doesn't follow symbolic links if ``follow_symlinks`` is False

* ``is_file(*, follow_symlinks=True)``: similar to
  ``pathlib.Path.is_file()``, but the return value is cached on the
  ``DirEntry`` object; doesn't require a system call in most cases;
  doesn't follow symbolic links if ``follow_symlinks`` is False

* ``is_symlink()``: similar to ``pathlib.Path.is_symlink()``, but the
  return value is cached on the ``DirEntry`` object; doesn't require a
  system call in most cases

* ``stat(*, follow_symlinks=True)``: like ``os.stat()``, but the
  return value is cached on the ``DirEntry`` object; does not require a
  system call on Windows (except for symlinks); doesn't follow symbolic
  links (like ``os.lstat()``) if ``follow_symlinks`` is False

* ``inode()``: return the inode number of the entry; the return value
  is cached on the ``DirEntry`` object
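The attributes above can be exercised directly. The following is a small sketch (the directory layout is invented for the demo) that checks the ``name``/``path`` relationship and classifies entries with ``is_dir()``:

```python
import os
import tempfile

# Build a throwaway directory containing one file and one sub-directory.
tmp = tempfile.mkdtemp()
open(os.path.join(tmp, "data.txt"), "w").close()
os.mkdir(os.path.join(tmp, "sub"))

kinds = {}
for entry in os.scandir(tmp):
    # entry.path is just the scandir() argument joined with entry.name
    assert entry.path == os.path.join(tmp, entry.name)
    # is_dir() usually answers from data the OS already returned,
    # so no extra stat() call is needed on most platforms
    kinds[entry.name] = "dir" if entry.is_dir() else "file"

print(kinds)
```

Note that the caching described above means calling ``entry.stat()`` or ``entry.is_dir()`` twice on the same entry costs at most one system call.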
Here's a very simple example of ``scandir()`` showing use of the
``DirEntry.name`` attribute and the ``DirEntry.is_dir()`` method:

.. code-block:: python

    def subdirs(path):
        """Yield directory names not starting with '.' under given path."""
        for entry in os.scandir(path):
            if not entry.name.startswith('.') and entry.is_dir():
                yield entry.name

This ``subdirs()`` function will be significantly faster with scandir
than ``os.listdir()`` and ``os.path.isdir()`` on both Windows and POSIX
systems, especially on medium-sized or large directories.
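For comparison, here is a sketch (not from the original README) of the equivalent ``listdir``-based version, which pays one extra ``os.path.isdir()`` call, and hence typically one extra ``stat()`` system call, per entry:

```python
import os
import tempfile

def subdirs_listdir(path):
    """Same result as the scandir-based subdirs(), but each name costs an
    extra os.path.isdir() check -- typically one stat() system call."""
    for name in os.listdir(path):
        if not name.startswith('.') and os.path.isdir(os.path.join(path, name)):
            yield name

# Quick check against a throwaway tree (names invented for the demo).
tmp = tempfile.mkdtemp()
os.mkdir(os.path.join(tmp, "sub"))
os.mkdir(os.path.join(tmp, ".hidden"))
open(os.path.join(tmp, "file.txt"), "w").close()
result = sorted(subdirs_listdir(tmp))
print(result)
```

The per-entry ``stat()`` calls are exactly the overhead that ``scandir``'s ``DirEntry`` caching avoids, which is where the speedup comes from.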
Further reading
---------------

* `The Python docs for scandir <https://docs.python.org/3.5/library/os.html#os.scandir>`_
* `PEP 471 <https://www.python.org/dev/peps/pep-0471/>`_, the
  (now-accepted) Python Enhancement Proposal that proposed adding
  ``scandir`` to the standard library -- a lot of details here,
  including rejected ideas and previous discussion


Flames, comments, bug reports
-----------------------------

Please send flames, comments, and questions about scandir to Ben Hoyt:

http://benhoyt.com/

File bug reports for the version in the Python 3.5 standard library
`here <https://docs.python.org/3.5/bugs.html>`_, or file bug reports
or feature requests for this module at the GitHub project page:

https://github.com/benhoyt/scandir
8
vendor/scandir-1.10.0.dist-info/RECORD
vendored
Normal file
@ -0,0 +1,8 @@
scandir-1.10.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
scandir-1.10.0.dist-info/LICENSE.txt,sha256=peL73COXREGdKUB828knk8TZwdlWwXT3y3-W-m0FjIY,1464
scandir-1.10.0.dist-info/METADATA,sha256=UUFB4SuFl0DlrwKQIYe_JNraKU0yPd33d2b7XSBdaiM,9558
scandir-1.10.0.dist-info/RECORD,,
scandir-1.10.0.dist-info/WHEEL,sha256=cOb9BSSJE-pGL62EpjoA_22lJ5wGdH3IHNzDlb0xqWE,104
scandir-1.10.0.dist-info/top_level.txt,sha256=Ixze5mNjmis99ql7JEtAYc9-djJMbfRx-FFw3R_zZf8,17
scandir.py,sha256=97C2AQInuKk-Phb3aXM7fJomhc-00pZMcBur23NUmrE,24827
scandir.pyc,,
5
vendor/scandir-1.10.0.dist-info/WHEEL
vendored
Normal file
@ -0,0 +1,5 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.37.1)
Root-Is-Purelib: false
Tag: cp27-cp27m-linux_x86_64
2
vendor/scandir-1.10.0.dist-info/top_level.txt
vendored
Normal file
@ -0,0 +1,2 @@
_scandir
scandir