


[Binary content: compiled Python 3.13 bytecode of /usr/lib/python3/dist-packages/lxml/html/diff.py (module lxml.html.diff). The bytecode is not reproducible as text; the recoverable docstrings describe the module's public API: html_annotate(doclist, markup=default_markup), which takes a list of (html_fragment, version) pairs ordered oldest to newest and wraps each word in a span naming the version that introduced it, and htmldiff(old_html, new_html), which diffs two HTML fragments and returns HTML with <ins> and <del> tags around the changed text.]
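The file shown here is the compiled form of lxml's HTML diffing module, lxml.html.diff. Its two public functions, html_annotate and htmldiff, are part of lxml's documented API and can be exercised directly; the expected output in the first example is taken verbatim from the module's own docstring.

```python
from lxml.html.diff import html_annotate, htmldiff

# Annotate each word with the version that introduced it
# (this example and its output come from the module docstring).
version1 = 'Hello World'
version2 = 'Goodbye World'
print(html_annotate([(version1, 'version 1'),
                     (version2, 'version 2')]))
# <span title="version 2">Goodbye</span> <span title="version 1">World</span>

# Diff two HTML fragments; changed words are wrapped in <ins>/<del> tags.
print(htmldiff('<p>Hello World</p>', '<p>Goodbye World</p>'))
```

Both functions operate on HTML *fragments* (no surrounding `<html>` tag); htmldiff preserves the markup of the new document and diffs only the words.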

Filemanager

Name                           Type  Size      Permission
ElementSoup.cpython-313.pyc    File  574 B     0644
__init__.cpython-313.pyc       File  78.33 KB  0644
_diffcommand.cpython-313.pyc   File  3.78 KB   0644
_html5builder.cpython-313.pyc  File  5.81 KB   0644
_setmixin.cpython-313.pyc      File  2.48 KB   0644
builder.cpython-313.pyc        File  5.2 KB    0644
clean.cpython-313.pyc          File  588 B     0644
defs.cpython-313.pyc           File  3.24 KB   0644
diff.cpython-313.pyc           File  33.69 KB  0644
formfill.cpython-313.pyc       File  12.25 KB  0644
html5parser.cpython-313.pyc    File  9.08 KB   0644
soupparser.cpython-313.pyc     File  11.53 KB  0644
usedoctest.cpython-313.pyc     File  462 B     0644