Description: Debian/antiX packaging adjustments for youtube-dl 2020.06.06
 Disable the built-in self-update mechanism in favour of updates via
 apt(8), adjust the Makefile codetest and offlinetest targets for the
 package build environment, drop the data-files list from setup.py
 (those files are installed by the packaging instead), and revert a
 number of upstream extractor changes, trimming the corresponding
 ChangeLog entries.
Author: anticapitalista <antix@operamail.com>
Last-Update: 2020-06-08

--- youtube-dl-2020.06.06.orig/ChangeLog
+++ youtube-dl-2020.06.06/ChangeLog
@@ -1,41 +1,3 @@
-version 2020.06.06
-
-Extractors
-* [tele5] Bypass geo restriction
-+ [jwplatform] Add support for bypass geo restriction
-* [tele5] Prefer jwplatform over nexx (#25533)
-* [twitch:stream] Expect 400 and 410 HTTP errors from API
-* [twitch:stream] Fix extraction (#25528)
-* [twitch] Fix thumbnails extraction (#25531)
-+ [twitch] Pass v5 Accept HTTP header (#25531)
-* [brightcove] Fix subtitles extraction (#25540)
-+ [malltv] Add support for sk.mall.tv (#25445)
-* [periscope] Fix untitled broadcasts (#25482)
-* [jwplatform] Improve embeds extraction (#25467)
-
-
-version 2020.05.29
-
-Core
-* [postprocessor/ffmpeg] Embed series metadata with --add-metadata
-* [utils] Fix file permissions in write_json_file (#12471, #25122)
-
-Extractors
-* [ard:beta] Extend URL regular expression (#25405)
-+ [youtube] Add support for more invidious instances (#25417)
-* [giantbomb] Extend URL regular expression (#25222)
-* [ard] Improve URL regular expression (#25134, #25198)
-* [redtube] Improve formats extraction and extract m3u8 formats (#25311,
-  #25321)
-* [indavideo] Switch to HTTPS for API request (#25191)
-* [redtube] Improve title extraction (#25208)
-* [vimeo] Improve format extraction and sorting (#25285)
-* [soundcloud] Reduce API playlist page limit (#25274)
-+ [youtube] Add support for yewtu.be (#25226)
-* [mailru] Fix extraction (#24530, #25239)
-* [bellator] Fix mgid extraction (#25195)
-
-
 version 2020.05.08
 
 Core
--- youtube-dl-2020.06.06.orig/Makefile
+++ youtube-dl-2020.06.06/Makefile
@@ -30,7 +30,10 @@ install: youtube-dl youtube-dl.1 youtube
 	install -m 644 youtube-dl.fish $(DESTDIR)$(SYSCONFDIR)/fish/completions/youtube-dl.fish
 
 codetest:
-	flake8 .
+	flake8 \
+		--exclude=.svn,CVS,.bzr,.hg,.git,__pycache__,.tox,.eggs,*.egg,.pc,.pybuild \
+		--ignore=E121,E123,E126,E226,E24,E704,E402,E501,E731,E741,W503,W504,N802,N803,N806,N816,F401,F821,E302,E305,N801,N815,E265,N813,F403,F405 \
+		.
 
 test:
 	#nosetests --with-coverage --cover-package=youtube_dl --cover-html --verbose --processes 4 test
@@ -49,7 +52,10 @@ offlinetest: codetest
 		--exclude test_subtitles.py \
 		--exclude test_write_annotations.py \
 		--exclude test_youtube_lists.py \
-		--exclude test_youtube_signature.py
+		--exclude test_youtube_signature.py \
+		--exclude test_downloader_http.py \
+		--exclude test_InfoExtractor.py \
+		--exclude test_http.py
 
 tar: youtube-dl.tar.gz
 
--- youtube-dl-2020.06.06.orig/README.md
+++ youtube-dl-2020.06.06/README.md
@@ -53,9 +53,6 @@ Alternatively, refer to the [developer i
 # OPTIONS
     -h, --help                       Print this help text and exit
     --version                        Print program version and exit
-    -U, --update                     Update this program to latest version. Make
-                                     sure that you have sufficient permissions
-                                     (run with sudo if needed)
     -i, --ignore-errors              Continue on download errors, for example to
                                      skip unavailable videos in a playlist
     --abort-on-error                 Abort downloading of further videos (in the
@@ -1032,7 +1029,7 @@ After you have ensured this site is dist
 5. Add an import in [`youtube_dl/extractor/extractors.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/extractors.py).
 6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, then rename ``_TEST`` to ``_TESTS`` and make it into a list of dictionaries. The tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. Note that tests with `only_matching` key in test's dict are not counted in.
 7. Have a look at [`youtube_dl/extractor/common.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303). Add tests and code for as many as you want.
-8. Make sure your code follows [youtube-dl coding conventions](#youtube-dl-coding-conventions) and check the code with [flake8](https://flake8.pycqa.org/en/latest/index.html#quickstart):
+8. Make sure your code follows [youtube-dl coding conventions](#youtube-dl-coding-conventions) and check the code with [flake8](http://flake8.pycqa.org/en/latest/index.html#quickstart):
 
         $ flake8 youtube_dl/extractor/yourextractor.py
 
--- youtube-dl-2020.06.06.orig/setup.py
+++ youtube-dl-2020.06.06/setup.py
@@ -58,12 +58,7 @@ py2exe_params = {
 if len(sys.argv) >= 2 and sys.argv[1] == 'py2exe':
     params = py2exe_params
 else:
-    files_spec = [
-        ('etc/bash_completion.d', ['youtube-dl.bash-completion']),
-        ('etc/fish/completions', ['youtube-dl.fish']),
-        ('share/doc/youtube_dl', ['README.txt']),
-        ('share/man/man1', ['youtube-dl.1'])
-    ]
+    files_spec = []
     root = os.path.dirname(os.path.abspath(__file__))
     data_files = []
     for dirname, files in files_spec:
--- youtube-dl-2020.06.06.orig/youtube_dl/__init__.py
+++ youtube-dl-2020.06.06/youtube_dl/__init__.py
@@ -36,7 +36,6 @@ from .utils import (
     write_string,
     render_table,
 )
-from .update import update_self
 from .downloader import (
     FileDownloader,
 )
@@ -441,7 +440,12 @@ def _real_main(argv=None):
     with YoutubeDL(ydl_opts) as ydl:
         # Update version
         if opts.update_self:
-            update_self(ydl.to_screen, opts.verbose, ydl._opener)
+            parser.error(
+                "youtube-dl's self-update mechanism is disabled on Debian.\n"
+                "Please update youtube-dl using apt(8).\n"
+                "See https://packages.debian.org/sid/youtube-dl for the "
+                "latest packaged version.\n"
+            )
 
         # Remove cache dir
         if opts.rm_cachedir:
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/ard.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/ard.py
@@ -249,7 +249,7 @@ class ARDMediathekIE(ARDMediathekBaseIE)
 
 
 class ARDIE(InfoExtractor):
-    _VALID_URL = r'(?P<mainurl>https?://(www\.)?daserste\.de/[^?#]+/videos(?:extern)?/(?P<display_id>[^/?#]+)-(?P<id>[0-9]+))\.html'
+    _VALID_URL = r'(?P<mainurl>https?://(www\.)?daserste\.de/[^?#]+/videos/(?P<display_id>[^/?#]+)-(?P<id>[0-9]+))\.html'
     _TESTS = [{
         # available till 14.02.2019
         'url': 'http://www.daserste.de/information/talk/maischberger/videos/das-groko-drama-zerlegen-sich-die-volksparteien-video-102.html',
@@ -264,9 +264,6 @@ class ARDIE(InfoExtractor):
             'thumbnail': r're:^https?://.*\.jpg$',
         },
     }, {
-        'url': 'https://www.daserste.de/information/reportage-dokumentation/erlebnis-erde/videosextern/woelfe-und-herdenschutzhunde-ungleiche-brueder-102.html',
-        'only_matching': True,
-    }, {
         'url': 'http://www.daserste.de/information/reportage-dokumentation/dokus/videos/die-story-im-ersten-mission-unter-falscher-flagge-100.html',
         'only_matching': True,
     }]
@@ -313,9 +310,9 @@ class ARDIE(InfoExtractor):
 
 
 class ARDBetaMediathekIE(ARDMediathekBaseIE):
-    _VALID_URL = r'https://(?:(?:beta|www)\.)?ardmediathek\.de/(?P<client>[^/]+)/(?:player|live|video)/(?P<display_id>(?:[^/]+/)*)(?P<video_id>[a-zA-Z0-9]+)'
+    _VALID_URL = r'https://(?:beta|www)\.ardmediathek\.de/(?P<client>[^/]+)/(?:player|live)/(?P<video_id>[a-zA-Z0-9]+)(?:/(?P<display_id>[^/?#]+))?'
     _TESTS = [{
-        'url': 'https://ardmediathek.de/ard/video/die-robuste-roswita/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhdG9ydC9mYmM4NGM1NC0xNzU4LTRmZGYtYWFhZS0wYzcyZTIxNGEyMDE',
+        'url': 'https://beta.ardmediathek.de/ard/player/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhdG9ydC9mYmM4NGM1NC0xNzU4LTRmZGYtYWFhZS0wYzcyZTIxNGEyMDE/die-robuste-roswita',
         'md5': 'dfdc87d2e7e09d073d5a80770a9ce88f',
         'info_dict': {
             'display_id': 'die-robuste-roswita',
@@ -329,15 +326,6 @@ class ARDBetaMediathekIE(ARDMediathekBas
             'ext': 'mp4',
         },
     }, {
-        'url': 'https://beta.ardmediathek.de/ard/video/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhdG9ydC9mYmM4NGM1NC0xNzU4LTRmZGYtYWFhZS0wYzcyZTIxNGEyMDE',
-        'only_matching': True,
-    }, {
-        'url': 'https://ardmediathek.de/ard/video/saartalk/saartalk-gesellschaftsgift-haltung-gegen-hass/sr-fernsehen/Y3JpZDovL3NyLW9ubGluZS5kZS9TVF84MTY4MA/',
-        'only_matching': True,
-    }, {
-        'url': 'https://www.ardmediathek.de/ard/video/trailer/private-eyes-s01-e01/one/Y3JpZDovL3dkci5kZS9CZWl0cmFnLTE1MTgwYzczLWNiMTEtNGNkMS1iMjUyLTg5MGYzOWQxZmQ1YQ/',
-        'only_matching': True,
-    }, {
         'url': 'https://www.ardmediathek.de/ard/player/Y3JpZDovL3N3ci5kZS9hZXgvbzEwNzE5MTU/',
         'only_matching': True,
     }, {
@@ -348,11 +336,7 @@ class ARDBetaMediathekIE(ARDMediathekBas
     def _real_extract(self, url):
         mobj = re.match(self._VALID_URL, url)
         video_id = mobj.group('video_id')
-        display_id = mobj.group('display_id')
-        if display_id:
-            display_id = display_id.rstrip('/')
-        if not display_id:
-            display_id = video_id
+        display_id = mobj.group('display_id') or video_id
 
         player_page = self._download_json(
             'https://api.ardmediathek.de/public-gateway',
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/bbc.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/bbc.py
@@ -528,7 +528,7 @@ class BBCCoUkIE(InfoExtractor):
 
             def get_programme_id(item):
                 def get_from_attributes(item):
-                    for p in ('identifier', 'group'):
+                    for p in('identifier', 'group'):
                         value = item.get(p)
                         if value and re.match(r'^[pb][\da-z]{7}$', value):
                             return value
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/brightcove.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/brightcove.py
@@ -5,34 +5,32 @@ import base64
 import re
 import struct
 
-from .adobepass import AdobePassIE
 from .common import InfoExtractor
+from .adobepass import AdobePassIE
 from ..compat import (
     compat_etree_fromstring,
-    compat_HTTPError,
     compat_parse_qs,
     compat_urllib_parse_urlparse,
     compat_urlparse,
     compat_xml_parse_error,
+    compat_HTTPError,
 )
 from ..utils import (
-    clean_html,
-    extract_attributes,
     ExtractorError,
+    extract_attributes,
     find_xpath_attr,
     fix_xml_ampersands,
     float_or_none,
-    int_or_none,
     js_to_json,
-    mimetype2ext,
+    int_or_none,
     parse_iso8601,
     smuggle_url,
-    str_or_none,
     unescapeHTML,
     unsmuggle_url,
-    UnsupportedError,
     update_url_query,
-    url_or_none,
+    clean_html,
+    mimetype2ext,
+    UnsupportedError,
 )
 
 
@@ -555,16 +553,10 @@ class BrightcoveNewIE(AdobePassIE):
 
         subtitles = {}
         for text_track in json_data.get('text_tracks', []):
-            if text_track.get('kind') != 'captions':
-                continue
-            text_track_url = url_or_none(text_track.get('src'))
-            if not text_track_url:
-                continue
-            lang = (str_or_none(text_track.get('srclang'))
-                    or str_or_none(text_track.get('label')) or 'en').lower()
-            subtitles.setdefault(lang, []).append({
-                'url': text_track_url,
-            })
+            if text_track.get('src'):
+                subtitles.setdefault(text_track.get('srclang'), []).append({
+                    'url': text_track['src'],
+                })
 
         is_live = False
         duration = float_or_none(json_data.get('duration'), 1000)
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/giantbomb.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/giantbomb.py
@@ -13,10 +13,10 @@ from ..utils import (
 
 
 class GiantBombIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?giantbomb\.com/(?:videos|shows)/(?P<display_id>[^/]+)/(?P<id>\d+-\d+)'
-    _TESTS = [{
+    _VALID_URL = r'https?://(?:www\.)?giantbomb\.com/videos/(?P<display_id>[^/]+)/(?P<id>\d+-\d+)'
+    _TEST = {
         'url': 'http://www.giantbomb.com/videos/quick-look-destiny-the-dark-below/2300-9782/',
-        'md5': '132f5a803e7e0ab0e274d84bda1e77ae',
+        'md5': 'c8ea694254a59246a42831155dec57ac',
         'info_dict': {
             'id': '2300-9782',
             'display_id': 'quick-look-destiny-the-dark-below',
@@ -26,10 +26,7 @@ class GiantBombIE(InfoExtractor):
             'duration': 2399,
             'thumbnail': r're:^https?://.*\.jpg$',
         }
-    }, {
-        'url': 'https://www.giantbomb.com/shows/ben-stranding/2970-20212',
-        'only_matching': True,
-    }]
+    }
 
     def _real_extract(self, url):
         mobj = re.match(self._VALID_URL, url)
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/indavideo.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/indavideo.py
@@ -58,7 +58,7 @@ class IndavideoEmbedIE(InfoExtractor):
         video_id = self._match_id(url)
 
         video = self._download_json(
-            'https://amfphp.indavideo.hu/SYm0json.php/player.playerHandler.getVideoData/%s' % video_id,
+            'http://amfphp.indavideo.hu/SYm0json.php/player.playerHandler.getVideoData/%s' % video_id,
             video_id)['data']
 
         title = video['title']
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/jwplatform.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/jwplatform.py
@@ -4,7 +4,6 @@ from __future__ import unicode_literals
 import re
 
 from .common import InfoExtractor
-from ..utils import unsmuggle_url
 
 
 class JWPlatformIE(InfoExtractor):
@@ -33,14 +32,10 @@ class JWPlatformIE(InfoExtractor):
     @staticmethod
     def _extract_urls(webpage):
         return re.findall(
-            r'<(?:script|iframe)[^>]+?src=["\']((?:https?:)?//(?:content\.jwplatform|cdn\.jwplayer)\.com/players/[a-zA-Z0-9]{8})',
+            r'<(?:script|iframe)[^>]+?src=["\']((?:https?:)?//content\.jwplatform\.com/players/[a-zA-Z0-9]{8})',
             webpage)
 
     def _real_extract(self, url):
-        url, smuggled_data = unsmuggle_url(url, {})
-        self._initialize_geo_bypass({
-            'countries': smuggled_data.get('geo_countries'),
-        })
         video_id = self._match_id(url)
         json_data = self._download_json('https://cdn.jwplayer.com/v2/media/' + video_id, video_id)
         return self._parse_jwplayer_data(json_data, video_id)
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/mailru.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/mailru.py
@@ -128,12 +128,6 @@ class MailRuIE(InfoExtractor):
                 'http://api.video.mail.ru/videos/%s.json?new=1' % video_id,
                 video_id, 'Downloading video JSON')
 
-        headers = {}
-
-        video_key = self._get_cookies('https://my.mail.ru').get('video_key')
-        if video_key:
-            headers['Cookie'] = 'video_key=%s' % video_key.value
-
         formats = []
         for f in video_data['videos']:
             video_url = f.get('url')
@@ -146,7 +140,6 @@ class MailRuIE(InfoExtractor):
                 'url': video_url,
                 'format_id': format_id,
                 'height': height,
-                'http_headers': headers,
             })
         self._sort_formats(formats)
 
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/malltv.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/malltv.py
@@ -8,7 +8,7 @@ from ..utils import merge_dicts
 
 
 class MallTVIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:(?:www|sk)\.)?mall\.tv/(?:[^/]+/)*(?P<id>[^/?#&]+)'
+    _VALID_URL = r'https?://(?:www\.)?mall\.tv/(?:[^/]+/)*(?P<id>[^/?#&]+)'
     _TESTS = [{
         'url': 'https://www.mall.tv/18-miliard-pro-neziskovky-opravdu-jsou-sportovci-nebo-clovek-v-tisni-pijavice',
         'md5': '1c4a37f080e1f3023103a7b43458e518',
@@ -26,9 +26,6 @@ class MallTVIE(InfoExtractor):
     }, {
         'url': 'https://www.mall.tv/kdo-to-plati/18-miliard-pro-neziskovky-opravdu-jsou-sportovci-nebo-clovek-v-tisni-pijavice',
         'only_matching': True,
-    }, {
-        'url': 'https://sk.mall.tv/gejmhaus/reklamacia-nehreje-vyrobnik-tepla-alebo-spekacka',
-        'only_matching': True,
     }]
 
     def _real_extract(self, url):
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/periscope.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/periscope.py
@@ -18,7 +18,7 @@ class PeriscopeBaseIE(InfoExtractor):
             item_id, query=query)
 
     def _parse_broadcast_data(self, broadcast, video_id):
-        title = broadcast.get('status') or 'Periscope Broadcast'
+        title = broadcast['status']
         uploader = broadcast.get('user_display_name') or broadcast.get('username')
         title = '%s - %s' % (uploader, title) if uploader else title
         is_live = broadcast.get('state').lower() == 'running'
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/redtube.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/redtube.py
@@ -4,7 +4,6 @@ import re
 
 from .common import InfoExtractor
 from ..utils import (
-    determine_ext,
     ExtractorError,
     int_or_none,
     merge_dicts,
@@ -58,7 +57,7 @@ class RedTubeIE(InfoExtractor):
 
         if not info.get('title'):
             info['title'] = self._html_search_regex(
-                (r'<h(\d)[^>]+class="(?:video_title_text|videoTitle|video_title)[^"]*">(?P<title>(?:(?!\1).)+)</h\1>',
+                (r'<h(\d)[^>]+class="(?:video_title_text|videoTitle)[^"]*">(?P<title>(?:(?!\1).)+)</h\1>',
                  r'(?:videoTitle|title)\s*:\s*(["\'])(?P<title>(?:(?!\1).)+)\1',),
                 webpage, 'title', group='title',
                 default=None) or self._og_search_title(webpage)
@@ -78,7 +77,7 @@ class RedTubeIE(InfoExtractor):
                     })
         medias = self._parse_json(
             self._search_regex(
-                r'mediaDefinition["\']?\s*:\s*(\[.+?}\s*\])', webpage,
+                r'mediaDefinition\s*:\s*(\[.+?\])', webpage,
                 'media definitions', default='{}'),
             video_id, fatal=False)
         if medias and isinstance(medias, list):
@@ -86,12 +85,6 @@ class RedTubeIE(InfoExtractor):
                 format_url = url_or_none(media.get('videoUrl'))
                 if not format_url:
                     continue
-                if media.get('format') == 'hls' or determine_ext(format_url) == 'm3u8':
-                    formats.extend(self._extract_m3u8_formats(
-                        format_url, video_id, 'mp4',
-                        entry_protocol='m3u8_native', m3u8_id='hls',
-                        fatal=False))
-                    continue
                 format_id = media.get('quality')
                 formats.append({
                     'url': format_url,
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/soundcloud.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/soundcloud.py
@@ -559,7 +559,7 @@ class SoundcloudSetIE(SoundcloudPlaylist
 class SoundcloudPagedPlaylistBaseIE(SoundcloudIE):
     def _extract_playlist(self, base_url, playlist_id, playlist_title):
         COMMON_QUERY = {
-            'limit': 80000,
+            'limit': 2000000000,
             'linked_partitioning': '1',
         }
 
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/spike.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/spike.py
@@ -8,10 +8,15 @@ class BellatorIE(MTVServicesInfoExtracto
     _TESTS = [{
         'url': 'http://www.bellator.com/fight/atwr7k/bellator-158-michael-page-vs-evangelista-cyborg',
         'info_dict': {
-            'title': 'Michael Page vs. Evangelista Cyborg',
-            'description': 'md5:0d917fc00ffd72dd92814963fc6cbb05',
+            'id': 'b55e434e-fde1-4a98-b7cc-92003a034de4',
+            'ext': 'mp4',
+            'title': 'Douglas Lima vs. Paul Daley - Round 1',
+            'description': 'md5:805a8dd29310fd611d32baba2f767885',
+        },
+        'params': {
+            # m3u8 download
+            'skip_download': True,
         },
-        'playlist_count': 3,
     }, {
         'url': 'http://www.bellator.com/video-clips/bw6k7n/bellator-158-foundations-michael-venom-page',
         'only_matching': True,
@@ -20,9 +25,6 @@ class BellatorIE(MTVServicesInfoExtracto
     _FEED_URL = 'http://www.bellator.com/feeds/mrss/'
     _GEO_COUNTRIES = ['US']
 
-    def _extract_mgid(self, webpage):
-        return self._extract_triforce_mgid(webpage)
-
 
 class ParamountNetworkIE(MTVServicesInfoExtractor):
     _VALID_URL = r'https?://(?:www\.)?paramountnetwork\.com/[^/]+/[\da-z]{6}(?:[/?#&]|$)'
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/tele5.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/tele5.py
@@ -6,16 +6,18 @@ import re
 from .common import InfoExtractor
 from .jwplatform import JWPlatformIE
 from .nexx import NexxIE
-from ..compat import compat_urlparse
+from ..compat import (
+    compat_str,
+    compat_urlparse,
+)
 from ..utils import (
     NO_DEFAULT,
-    smuggle_url,
+    try_get,
 )
 
 
 class Tele5IE(InfoExtractor):
     _VALID_URL = r'https?://(?:www\.)?tele5\.de/(?:[^/]+/)*(?P<id>[^/?#&]+)'
-    _GEO_COUNTRIES = ['DE']
     _TESTS = [{
         'url': 'https://www.tele5.de/mediathek/filme-online/videos?vid=1549416',
         'info_dict': {
@@ -29,21 +31,6 @@ class Tele5IE(InfoExtractor):
             'skip_download': True,
         },
     }, {
-        # jwplatform, nexx unavailable
-        'url': 'https://www.tele5.de/filme/ghoul-das-geheimnis-des-friedhofmonsters/',
-        'info_dict': {
-            'id': 'WJuiOlUp',
-            'ext': 'mp4',
-            'upload_date': '20200603',
-            'timestamp': 1591214400,
-            'title': 'Ghoul - Das Geheimnis des Friedhofmonsters',
-            'description': 'md5:42002af1d887ff3d5b2b3ca1f8137d97',
-        },
-        'params': {
-            'skip_download': True,
-        },
-        'add_ie': [JWPlatformIE.ie_key()],
-    }, {
         'url': 'https://www.tele5.de/kalkofes-mattscheibe/video-clips/politik-und-gesellschaft?ve_id=1551191',
         'only_matching': True,
     }, {
@@ -101,8 +88,15 @@ class Tele5IE(InfoExtractor):
             if not jwplatform_id:
                 jwplatform_id = extract_id(JWPLATFORM_ID_RE, 'jwplatform id')
 
+            media = self._download_json(
+                'https://cdn.jwplayer.com/v2/media/' + jwplatform_id,
+                display_id)
+            nexx_id = try_get(
+                media, lambda x: x['playlist'][0]['nexx_id'], compat_str)
+
+            if nexx_id:
+                return nexx_result(nexx_id)
+
         return self.url_result(
-            smuggle_url(
-                'jwplatform:%s' % jwplatform_id,
-                {'geo_countries': self._GEO_COUNTRIES}),
-            ie=JWPlatformIE.ie_key(), video_id=jwplatform_id)
+            'jwplatform:%s' % jwplatform_id, ie=JWPlatformIE.ie_key(),
+            video_id=jwplatform_id)
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/twitch.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/twitch.py
@@ -21,8 +21,6 @@ from ..utils import (
     orderedSet,
     parse_duration,
     parse_iso8601,
-    qualities,
-    str_or_none,
     try_get,
     unified_timestamp,
     update_url_query,
@@ -52,14 +50,8 @@ class TwitchBaseIE(InfoExtractor):
 
     def _call_api(self, path, item_id, *args, **kwargs):
         headers = kwargs.get('headers', {}).copy()
-        headers.update({
-            'Accept': 'application/vnd.twitchtv.v5+json; charset=UTF-8',
-            'Client-ID': self._CLIENT_ID,
-        })
-        kwargs.update({
-            'headers': headers,
-            'expected_status': (400, 410),
-        })
+        headers['Client-ID'] = self._CLIENT_ID
+        kwargs['headers'] = headers
         response = self._download_json(
             '%s/%s' % (self._API_BASE, path), item_id,
             *args, **compat_kwargs(kwargs))
@@ -194,27 +186,12 @@ class TwitchItemBaseIE(TwitchBaseIE):
             is_live = False
         else:
             is_live = None
-        _QUALITIES = ('small', 'medium', 'large')
-        quality_key = qualities(_QUALITIES)
-        thumbnails = []
-        preview = info.get('preview')
-        if isinstance(preview, dict):
-            for thumbnail_id, thumbnail_url in preview.items():
-                thumbnail_url = url_or_none(thumbnail_url)
-                if not thumbnail_url:
-                    continue
-                if thumbnail_id not in _QUALITIES:
-                    continue
-                thumbnails.append({
-                    'url': thumbnail_url,
-                    'preference': quality_key(thumbnail_id),
-                })
         return {
             'id': info['_id'],
             'title': info.get('title') or 'Untitled Broadcast',
             'description': info.get('description'),
             'duration': int_or_none(info.get('length')),
-            'thumbnails': thumbnails,
+            'thumbnail': info.get('preview'),
             'uploader': info.get('channel', {}).get('display_name'),
             'uploader_id': info.get('channel', {}).get('name'),
             'timestamp': parse_iso8601(info.get('recorded_at')),
@@ -595,18 +572,10 @@ class TwitchStreamIE(TwitchBaseIE):
                 else super(TwitchStreamIE, cls).suitable(url))
 
     def _real_extract(self, url):
-        channel_name = self._match_id(url)
-
-        access_token = self._call_api(
-            'api/channels/%s/access_token' % channel_name, channel_name,
-            'Downloading access token JSON')
-
-        token = access_token['token']
-        channel_id = compat_str(self._parse_json(
-            token, channel_name)['channel_id'])
+        channel_id = self._match_id(url)
 
         stream = self._call_api(
-            'kraken/streams/%s?stream_type=all' % channel_id,
+            'kraken/streams/%s?stream_type=all' % channel_id.lower(),
             channel_id, 'Downloading stream JSON').get('stream')
 
         if not stream:
@@ -616,9 +585,11 @@ class TwitchStreamIE(TwitchBaseIE):
         # (e.g. http://www.twitch.tv/TWITCHPLAYSPOKEMON) that will lead to constructing
         # an invalid m3u8 URL. Working around by use of original channel name from stream
         # JSON and fallback to lowercase if it's not available.
-        channel_name = try_get(
-            stream, lambda x: x['channel']['name'],
-            compat_str) or channel_name.lower()
+        channel_id = stream.get('channel', {}).get('name') or channel_id.lower()
+
+        access_token = self._call_api(
+            'api/channels/%s/access_token' % channel_id, channel_id,
+            'Downloading channel access token')
 
         query = {
             'allow_source': 'true',
@@ -629,11 +600,11 @@ class TwitchStreamIE(TwitchBaseIE):
             'playlist_include_framerate': 'true',
             'segment_preference': '4',
             'sig': access_token['sig'].encode('utf-8'),
-            'token': token.encode('utf-8'),
+            'token': access_token['token'].encode('utf-8'),
         }
         formats = self._extract_m3u8_formats(
             '%s/api/channel/hls/%s.m3u8?%s'
-            % (self._USHER_BASE, channel_name, compat_urllib_parse_urlencode(query)),
+            % (self._USHER_BASE, channel_id, compat_urllib_parse_urlencode(query)),
             channel_id, 'mp4')
         self._prefer_source(formats)
 
@@ -656,8 +627,8 @@ class TwitchStreamIE(TwitchBaseIE):
             })
 
         return {
-            'id': str_or_none(stream.get('_id')) or channel_id,
-            'display_id': channel_name,
+            'id': compat_str(stream['_id']),
+            'display_id': channel_id,
             'title': title,
             'description': description,
             'thumbnails': thumbnails,
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/twitter.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/twitter.py
@@ -578,18 +578,6 @@ class TwitterBroadcastIE(TwitterBaseIE,
     IE_NAME = 'twitter:broadcast'
     _VALID_URL = TwitterBaseIE._BASE_REGEX + r'i/broadcasts/(?P<id>[0-9a-zA-Z]{13})'
 
-    _TEST = {
-        # untitled Periscope video
-        'url': 'https://twitter.com/i/broadcasts/1yNGaQLWpejGj',
-        'info_dict': {
-            'id': '1yNGaQLWpejGj',
-            'ext': 'mp4',
-            'title': 'Andrea May Sahouri - Periscope Broadcast',
-            'uploader': 'Andrea May Sahouri',
-            'uploader_id': '1PXEdBZWpGwKe',
-        },
-    }
-
     def _real_extract(self, url):
         broadcast_id = self._match_id(url)
         broadcast = self._call_api(
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/vimeo.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/vimeo.py
@@ -140,28 +140,28 @@ class VimeoBaseInfoExtractor(InfoExtract
             })
 
         # TODO: fix handling of 308 status code returned for live archive manifest requests
-        sep_pattern = r'/sep/video/'
         for files_type in ('hls', 'dash'):
             for cdn_name, cdn_data in config_files.get(files_type, {}).get('cdns', {}).items():
                 manifest_url = cdn_data.get('url')
                 if not manifest_url:
                     continue
                 format_id = '%s-%s' % (files_type, cdn_name)
-                sep_manifest_urls = []
-                if re.search(sep_pattern, manifest_url):
-                    for suffix, repl in (('', 'video'), ('_sep', 'sep/video')):
-                        sep_manifest_urls.append((format_id + suffix, re.sub(
-                            sep_pattern, '/%s/' % repl, manifest_url)))
-                else:
-                    sep_manifest_urls = [(format_id, manifest_url)]
-                for f_id, m_url in sep_manifest_urls:
-                    if files_type == 'hls':
-                        formats.extend(self._extract_m3u8_formats(
-                            m_url, video_id, 'mp4',
-                            'm3u8' if is_live else 'm3u8_native', m3u8_id=f_id,
-                            note='Downloading %s m3u8 information' % cdn_name,
-                            fatal=False))
-                    elif files_type == 'dash':
+                if files_type == 'hls':
+                    formats.extend(self._extract_m3u8_formats(
+                        manifest_url, video_id, 'mp4',
+                        'm3u8' if is_live else 'm3u8_native', m3u8_id=format_id,
+                        note='Downloading %s m3u8 information' % cdn_name,
+                        fatal=False))
+                elif files_type == 'dash':
+                    mpd_pattern = r'/%s/(?:sep/)?video/' % video_id
+                    mpd_manifest_urls = []
+                    if re.search(mpd_pattern, manifest_url):
+                        for suffix, repl in (('', 'video'), ('_sep', 'sep/video')):
+                            mpd_manifest_urls.append((format_id + suffix, re.sub(
+                                mpd_pattern, '/%s/%s/' % (video_id, repl), manifest_url)))
+                    else:
+                        mpd_manifest_urls = [(format_id, manifest_url)]
+                    for f_id, m_url in mpd_manifest_urls:
                         if 'json=1' in m_url:
                             real_m_url = (self._download_json(m_url, video_id, fatal=False) or {}).get('url')
                             if real_m_url:
@@ -170,6 +170,11 @@ class VimeoBaseInfoExtractor(InfoExtract
                             m_url.replace('/master.json', '/master.mpd'), video_id, f_id,
                             'Downloading %s MPD information' % cdn_name,
                             fatal=False)
+                        for f in mpd_formats:
+                            if f.get('vcodec') == 'none':
+                                f['preference'] = -50
+                            elif f.get('acodec') == 'none':
+                                f['preference'] = -40
                         formats.extend(mpd_formats)
 
         live_archive = live_event.get('archive') or {}
@@ -181,12 +186,6 @@ class VimeoBaseInfoExtractor(InfoExtract
                 'preference': 1,
             })
 
-        for f in formats:
-            if f.get('vcodec') == 'none':
-                f['preference'] = -50
-            elif f.get('acodec') == 'none':
-                f['preference'] = -40
-
         subtitles = {}
         text_tracks = config['request'].get('text_tracks')
         if text_tracks:
--- youtube-dl-2020.06.06.orig/youtube_dl/extractor/youtube.py
+++ youtube-dl-2020.06.06/youtube_dl/extractor/youtube.py
@@ -388,15 +388,8 @@ class YoutubeIE(YoutubeBaseInfoExtractor
                             (?:www\.)?invidious\.drycat\.fr/|
                             (?:www\.)?tube\.poal\.co/|
                             (?:www\.)?vid\.wxzm\.sx/|
-                            (?:www\.)?yewtu\.be/|
                             (?:www\.)?yt\.elukerio\.org/|
                             (?:www\.)?yt\.lelux\.fi/|
-                            (?:www\.)?invidious\.ggc-project\.de/|
-                            (?:www\.)?yt\.maisputain\.ovh/|
-                            (?:www\.)?invidious\.13ad\.de/|
-                            (?:www\.)?invidious\.toot\.koeln/|
-                            (?:www\.)?invidious\.fdn\.fr/|
-                            (?:www\.)?watch\.nettohikari\.com/|
                             (?:www\.)?kgg2m7yk5aybusll\.onion/|
                             (?:www\.)?qklhadlycap4cnod\.onion/|
                             (?:www\.)?axqzx4s6s54s32yentfqojs3x5i7faxza6xo3ehd4bzzsg2ii4fv2iid\.onion/|
@@ -404,7 +397,6 @@ class YoutubeIE(YoutubeBaseInfoExtractor
                             (?:www\.)?fz253lmuao3strwbfbmx46yu7acac2jz27iwtorgmbqlkurlclmancad\.onion/|
                             (?:www\.)?invidious\.l4qlywnpwqsluw65ts7md3khrivpirse744un3x7mlskqauz5pyuzgqd\.onion/|
                             (?:www\.)?owxfohz4kjyv25fvlqilyxast7inivgiktls3th44jhk3ej3i7ya\.b32\.i2p/|
-                            (?:www\.)?4l2dgddgsrkf2ous66i6seeyi6etzfgrue332grh2n7madpwopotugyd\.onion/|
                             youtube\.googleapis\.com/)                        # the various hostnames, with wildcard subdomains
                          (?:.*?\#/)?                                          # handle anchor (#/) redirect urls
                          (?:                                                  # the various things that can precede the ID:
--- youtube-dl-2020.06.06.orig/youtube_dl/options.py
+++ youtube-dl-2020.06.06/youtube_dl/options.py
@@ -140,7 +140,7 @@ def parseOpts(overrideArguments=None):
     general.add_option(
         '-U', '--update',
         action='store_true', dest='update_self',
-        help='Update this program to latest version. Make sure that you have sufficient permissions (run with sudo if needed)')
+        help=optparse.SUPPRESS_HELP)
     general.add_option(
         '-i', '--ignore-errors',
         action='store_true', dest='ignoreerrors', default=False,
--- youtube-dl-2020.06.06.orig/youtube_dl/postprocessor/ffmpeg.py
+++ youtube-dl-2020.06.06/youtube_dl/postprocessor/ffmpeg.py
@@ -447,13 +447,6 @@ class FFmpegMetadataPP(FFmpegPostProcess
                         metadata[meta_f] = info[info_f]
                     break
 
-        # See [1-4] for some info on media metadata/metadata supported
-        # by ffmpeg.
-        # 1. https://kdenlive.org/en/project/adding-meta-data-to-mp4-video/
-        # 2. https://wiki.multimedia.cx/index.php/FFmpeg_Metadata
-        # 3. https://kodi.wiki/view/Video_file_tagging
-        # 4. http://atomicparsley.sourceforge.net/mpeg-4files.html
-
         add('title', ('track', 'title'))
         add('date', 'upload_date')
         add(('description', 'comment'), 'description')
@@ -464,10 +457,6 @@ class FFmpegMetadataPP(FFmpegPostProcess
         add('album')
         add('album_artist')
         add('disc', 'disc_number')
-        add('show', 'series')
-        add('season_number')
-        add('episode_id', ('episode', 'episode_id'))
-        add('episode_sort', 'episode_number')
 
         if not metadata:
             self._downloader.to_screen('[ffmpeg] There isn\'t any metadata to add')
--- youtube-dl-2020.06.06.orig/youtube_dl/utils.py
+++ youtube-dl-2020.06.06/youtube_dl/utils.py
@@ -1837,12 +1837,6 @@ def write_json_file(obj, fn):
                 os.unlink(fn)
             except OSError:
                 pass
-        try:
-            mask = os.umask(0)
-            os.umask(mask)
-            os.chmod(tf.name, 0o666 & ~mask)
-        except OSError:
-            pass
         os.rename(tf.name, fn)
     except Exception:
         try:
--- youtube-dl-2020.06.06.orig/youtube_dl/version.py
+++ youtube-dl-2020.06.06/youtube_dl/version.py
@@ -1,3 +1,3 @@
 from __future__ import unicode_literals
 
-__version__ = '2020.06.06'
+__version__ = '2020.05.08'
