Merge remote-tracking branch 'upstream/master' into fix-zing-mp3

commit fdf6405f03

6 .github/ISSUE_TEMPLATE/1_broken_site.md (vendored)
@@ -18,7 +18,7 @@ title: ''

<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.06.08. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.07.02. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
@@ -26,7 +26,7 @@ Carefully read and work through this check list in order to prevent the most com
-->

- [ ] I'm reporting a broken site support
- [ ] I've verified that I'm running youtube-dl version **2019.06.08**
- [ ] I've verified that I'm running youtube-dl version **2019.07.02**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar issues including closed ones
@@ -41,7 +41,7 @@ Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2019.06.08
[debug] youtube-dl version 2019.07.02
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}

.github/ISSUE_TEMPLATE/2_site_support_request.md (vendored)

@@ -19,7 +19,7 @@ labels: 'site-support-request'

<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.06.08. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.07.02. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
@@ -27,7 +27,7 @@ Carefully read and work through this check list in order to prevent the most com
-->

- [ ] I'm reporting a new site support request
- [ ] I've verified that I'm running youtube-dl version **2019.06.08**
- [ ] I've verified that I'm running youtube-dl version **2019.07.02**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that none of provided URLs violate any copyrights
- [ ] I've searched the bugtracker for similar site support requests including closed ones

.github/ISSUE_TEMPLATE/3_site_feature_request.md (vendored)

@@ -18,13 +18,13 @@ title: ''

<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.06.08. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.07.02. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar site feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->

- [ ] I'm reporting a site feature request
- [ ] I've verified that I'm running youtube-dl version **2019.06.08**
- [ ] I've verified that I'm running youtube-dl version **2019.07.02**
- [ ] I've searched the bugtracker for similar site feature requests including closed ones

6 .github/ISSUE_TEMPLATE/4_bug_report.md (vendored)
@@ -18,7 +18,7 @@ title: ''

<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.06.08. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.07.02. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
@@ -27,7 +27,7 @@ Carefully read and work through this check list in order to prevent the most com
-->

- [ ] I'm reporting a broken site support issue
- [ ] I've verified that I'm running youtube-dl version **2019.06.08**
- [ ] I've verified that I'm running youtube-dl version **2019.07.02**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar bug reports including closed ones
@@ -43,7 +43,7 @@ Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2019.06.08
[debug] youtube-dl version 2019.07.02
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}

4 .github/ISSUE_TEMPLATE/5_feature_request.md (vendored)
@@ -19,13 +19,13 @@ labels: 'request'

<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.06.08. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.07.02. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->

- [ ] I'm reporting a feature request
- [ ] I've verified that I'm running youtube-dl version **2019.06.08**
- [ ] I've verified that I'm running youtube-dl version **2019.07.02**
- [ ] I've searched the bugtracker for similar feature requests including closed ones

53 ChangeLog
@@ -1,3 +1,56 @@
version 2019.07.02

Core
+ [utils] Introduce random_user_agent and use as default User-Agent (#21546)

Extractors
+ [vevo] Add support for embed.vevo.com URLs (#21565)
+ [openload] Add support for oload.biz (#21574)
* [xiami] Update API base URL (#21575)
* [yourporn] Fix extraction (#21585)
+ [acast] Add support for URLs with episode id (#21444)
+ [dailymotion] Add support for DM.player embeds
* [soundcloud] Update client id


version 2019.06.27

Extractors
+ [go] Add support for disneynow.com (#21528)
* [mixer:vod] Relax URL regular expression (#21531, #21536)
* [drtv] Relax URL regular expression
* [fusion] Fix extraction (#17775, #21269)
- [nfb] Remove extractor (#21518)
+ [beeg] Add support for api/v6 v2 URLs (#21511)
+ [brightcove:new] Add support for playlists (#21331)
+ [openload] Add support for oload.life (#21495)
* [vimeo:channel,group] Make title extraction non fatal
* [vimeo:likes] Implement extrator in terms of channel extractor (#21493)
+ [pornhub] Add support for more paged video sources
+ [pornhub] Add support for downloading single pages and search pages (#15570)
* [pornhub] Rework extractors (#11922, #16078, #17454, #17936)
+ [youtube] Add another signature function pattern
* [tf1] Fix extraction (#21365, #21372)
* [crunchyroll] Move Accept-Language workaround to video extractor since
  it causes playlists not to list any videos
* [crunchyroll:playlist] Fix and relax title extraction (#21291, #21443)


version 2019.06.21

Core
* [utils] Restrict parse_codecs and add theora as known vcodec (#21381)

Extractors
* [youtube] Update signature function patterns (#21469, #21476)
* [youtube] Make --write-annotations non fatal (#21452)
+ [sixplay] Add support for rtlmost.hu (#21405)
* [youtube] Hardcode codec metadata for av01 video only formats (#21381)
* [toutv] Update client key (#21370)
+ [biqle] Add support for new embed domain
* [cbs] Improve DRM protected videos detection (#21339)


version 2019.06.08

Core
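Note (not part of the commit): a minimal sketch of the new User-Agent helper mentioned above, assuming random_user_agent() lives in youtube_dl.utils, takes no arguments and returns a browser-like UA string.

# Illustrative only; assumes the helper and std_headers exist as described.
from youtube_dl.utils import random_user_agent, std_headers

print(random_user_agent())        # a randomized, browser-like User-Agent string
print(std_headers['User-Agent'])  # the default UA youtube-dl now sends
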
docs/supportedsites.md

@@ -581,7 +581,6 @@
- **NextTV**: 壹電視
- **Nexx**
- **NexxEmbed**
- **nfb**: National Film Board of Canada
- **nfl.com**
- **NhkVod**
- **nhl.com**
@@ -692,8 +691,9 @@
- **PornerBros**
- **PornHd**
- **PornHub**: PornHub and Thumbzilla
- **PornHubPlaylist**
- **PornHubUserVideos**
- **PornHubPagedVideoList**
- **PornHubUser**
- **PornHubUserVideosUpload**
- **Pornotube**
- **PornoVoisines**
- **PornoXO**

test/test_utils.py

@@ -822,6 +822,15 @@ class TestUtil(unittest.TestCase):
            'vcodec': 'av01.0.05M.08',
            'acodec': 'none',
        })
        self.assertEqual(parse_codecs('theora, vorbis'), {
            'vcodec': 'theora',
            'acodec': 'vorbis',
        })
        self.assertEqual(parse_codecs('unknownvcodec, unknownacodec'), {
            'vcodec': 'unknownvcodec',
            'acodec': 'unknownacodec',
        })
        self.assertEqual(parse_codecs('unknown'), {})

    def test_escape_rfc3986(self):
        reserved = "!*'();:@&=+$,/?#[]"

youtube_dl/extractor/acast.py

@@ -7,6 +7,7 @@ import functools
from .common import InfoExtractor
from ..compat import compat_str
from ..utils import (
    clean_html,
    float_or_none,
    int_or_none,
    try_get,
@@ -27,7 +28,7 @@ class ACastIE(InfoExtractor):
    '''
    _TESTS = [{
        'url': 'https://www.acast.com/sparpodcast/2.raggarmordet-rosterurdetforflutna',
        'md5': 'a02393c74f3bdb1801c3ec2695577ce0',
        'md5': '16d936099ec5ca2d5869e3a813ee8dc4',
        'info_dict': {
            'id': '2a92b283-1a75-4ad8-8396-499c641de0d9',
            'ext': 'mp3',
@@ -46,28 +47,37 @@ class ACastIE(InfoExtractor):
    }, {
        'url': 'https://play.acast.com/s/rattegangspodden/s04e09-styckmordet-i-helenelund-del-22',
        'only_matching': True,
    }, {
        'url': 'https://play.acast.com/s/sparpodcast/2a92b283-1a75-4ad8-8396-499c641de0d9',
        'only_matching': True,
    }]

    def _real_extract(self, url):
        channel, display_id = re.match(self._VALID_URL, url).groups()
        s = self._download_json(
            'https://play-api.acast.com/stitch/%s/%s' % (channel, display_id),
            display_id)['result']
            'https://feeder.acast.com/api/v1/shows/%s/episodes/%s' % (channel, display_id),
            display_id)
        media_url = s['url']
        if re.search(r'[0-9a-f]{8}-(?:[0-9a-f]{4}-){3}[0-9a-f]{12}', display_id):
            episode_url = s.get('episodeUrl')
            if episode_url:
                display_id = episode_url
            else:
                channel, display_id = re.match(self._VALID_URL, s['link']).groups()
        cast_data = self._download_json(
            'https://play-api.acast.com/splash/%s/%s' % (channel, display_id),
            display_id)['result']
        e = cast_data['episode']
        title = e['name']
        title = e.get('name') or s['title']
        return {
            'id': compat_str(e['id']),
            'display_id': display_id,
            'url': media_url,
            'title': title,
            'description': e.get('description') or e.get('summary'),
            'description': e.get('summary') or clean_html(e.get('description') or s.get('description')),
            'thumbnail': e.get('image'),
            'timestamp': unified_timestamp(e.get('publishingDate')),
            'duration': float_or_none(s.get('duration') or e.get('duration')),
            'timestamp': unified_timestamp(e.get('publishingDate') or s.get('publishDate')),
            'duration': float_or_none(e.get('duration') or s.get('duration')),
            'filesize': int_or_none(e.get('contentLength')),
            'creator': try_get(cast_data, lambda x: x['show']['author'], compat_str),
            'series': try_get(cast_data, lambda x: x['show']['name'], compat_str),

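Note (not part of the diff): the new acast code path above switches behaviour when the display id looks like a UUID; a standalone sketch of that check, using the same regular expression.

import re

UUID_RE = r'[0-9a-f]{8}-(?:[0-9a-f]{4}-){3}[0-9a-f]{12}'

print(bool(re.search(UUID_RE, '2a92b283-1a75-4ad8-8396-499c641de0d9')))  # True
print(bool(re.search(UUID_RE, '2.raggarmordet-rosterurdetforflutna')))   # False
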
youtube_dl/extractor/adobepass.py

@@ -25,6 +25,11 @@ MSO_INFO = {
        'username_field': 'username',
        'password_field': 'password',
    },
    'ATT': {
        'name': 'AT&T U-verse',
        'username_field': 'userid',
        'password_field': 'password',
    },
    'ATTOTT': {
        'name': 'DIRECTV NOW',
        'username_field': 'email',

@ -4,17 +4,10 @@ from __future__ import unicode_literals
|
||||
import re
|
||||
|
||||
from .common import InfoExtractor
|
||||
from ..compat import (
|
||||
compat_parse_qs,
|
||||
compat_str,
|
||||
compat_urllib_parse_urlparse,
|
||||
)
|
||||
from ..compat import compat_str
|
||||
from ..utils import (
|
||||
ExtractorError,
|
||||
find_xpath_attr,
|
||||
get_element_by_attribute,
|
||||
int_or_none,
|
||||
NO_DEFAULT,
|
||||
qualities,
|
||||
try_get,
|
||||
unified_strdate,
|
||||
@ -25,59 +18,7 @@ from ..utils import (
|
||||
# add tests.
|
||||
|
||||
|
||||
class ArteTvIE(InfoExtractor):
|
||||
_VALID_URL = r'https?://videos\.arte\.tv/(?P<lang>fr|de|en|es)/.*-(?P<id>.*?)\.html'
|
||||
IE_NAME = 'arte.tv'
|
||||
|
||||
def _real_extract(self, url):
|
||||
mobj = re.match(self._VALID_URL, url)
|
||||
lang = mobj.group('lang')
|
||||
video_id = mobj.group('id')
|
||||
|
||||
ref_xml_url = url.replace('/videos/', '/do_delegate/videos/')
|
||||
ref_xml_url = ref_xml_url.replace('.html', ',view,asPlayerXml.xml')
|
||||
ref_xml_doc = self._download_xml(
|
||||
ref_xml_url, video_id, note='Downloading metadata')
|
||||
config_node = find_xpath_attr(ref_xml_doc, './/video', 'lang', lang)
|
||||
config_xml_url = config_node.attrib['ref']
|
||||
config = self._download_xml(
|
||||
config_xml_url, video_id, note='Downloading configuration')
|
||||
|
||||
formats = [{
|
||||
'format_id': q.attrib['quality'],
|
||||
# The playpath starts at 'mp4:', if we don't manually
|
||||
# split the url, rtmpdump will incorrectly parse them
|
||||
'url': q.text.split('mp4:', 1)[0],
|
||||
'play_path': 'mp4:' + q.text.split('mp4:', 1)[1],
|
||||
'ext': 'flv',
|
||||
'quality': 2 if q.attrib['quality'] == 'hd' else 1,
|
||||
} for q in config.findall('./urls/url')]
|
||||
self._sort_formats(formats)
|
||||
|
||||
title = config.find('.//name').text
|
||||
thumbnail = config.find('.//firstThumbnailUrl').text
|
||||
return {
|
||||
'id': video_id,
|
||||
'title': title,
|
||||
'thumbnail': thumbnail,
|
||||
'formats': formats,
|
||||
}
|
||||
|
||||
|
||||
class ArteTVBaseIE(InfoExtractor):
|
||||
@classmethod
|
||||
def _extract_url_info(cls, url):
|
||||
mobj = re.match(cls._VALID_URL, url)
|
||||
lang = mobj.group('lang')
|
||||
query = compat_parse_qs(compat_urllib_parse_urlparse(url).query)
|
||||
if 'vid' in query:
|
||||
video_id = query['vid'][0]
|
||||
else:
|
||||
# This is not a real id, it can be for example AJT for the news
|
||||
# http://www.arte.tv/guide/fr/emissions/AJT/arte-journal
|
||||
video_id = mobj.group('id')
|
||||
return video_id, lang
|
||||
|
||||
def _extract_from_json_url(self, json_url, video_id, lang, title=None):
|
||||
info = self._download_json(json_url, video_id)
|
||||
player_info = info['videoJsonPlayer']
|
||||
@ -108,13 +49,15 @@ class ArteTVBaseIE(InfoExtractor):
|
||||
'upload_date': unified_strdate(upload_date_str),
|
||||
'thumbnail': player_info.get('programImage') or player_info.get('VTU', {}).get('IUR'),
|
||||
}
|
||||
qfunc = qualities(['HQ', 'MQ', 'EQ', 'SQ'])
|
||||
qfunc = qualities(['MQ', 'HQ', 'EQ', 'SQ'])
|
||||
|
||||
LANGS = {
|
||||
'fr': 'F',
|
||||
'de': 'A',
|
||||
'en': 'E[ANG]',
|
||||
'es': 'E[ESP]',
|
||||
'it': 'E[ITA]',
|
||||
'pl': 'E[POL]',
|
||||
}
|
||||
|
||||
langcode = LANGS.get(lang, lang)
|
||||
@ -126,8 +69,8 @@ class ArteTVBaseIE(InfoExtractor):
|
||||
l = re.escape(langcode)
|
||||
|
||||
# Language preference from most to least priority
|
||||
# Reference: section 5.6.3 of
|
||||
# http://www.arte.tv/sites/en/corporate/files/complete-technical-guidelines-arte-geie-v1-05.pdf
|
||||
# Reference: section 6.8 of
|
||||
# https://www.arte.tv/sites/en/corporate/files/complete-technical-guidelines-arte-geie-v1-07-1.pdf
|
||||
PREFERENCES = (
|
||||
# original version in requested language, without subtitles
|
||||
r'VO{0}$'.format(l),
|
||||
@ -193,274 +136,59 @@ class ArteTVBaseIE(InfoExtractor):
|
||||
|
||||
class ArteTVPlus7IE(ArteTVBaseIE):
|
||||
IE_NAME = 'arte.tv:+7'
|
||||
_VALID_URL = r'https?://(?:(?:www|sites)\.)?arte\.tv/(?:[^/]+/)?(?P<lang>fr|de|en|es)/(?:videos/)?(?:[^/]+/)*(?P<id>[^/?#&]+)'
|
||||
_VALID_URL = r'https?://(?:www\.)?arte\.tv/(?P<lang>fr|de|en|es|it|pl)/videos/(?P<id>\d{6}-\d{3}-[AF])'
|
||||
|
||||
_TESTS = [{
|
||||
'url': 'http://www.arte.tv/guide/de/sendungen/XEN/xenius/?vid=055918-015_PLUS7-D',
|
||||
'only_matching': True,
|
||||
}, {
|
||||
'url': 'http://sites.arte.tv/karambolage/de/video/karambolage-22',
|
||||
'only_matching': True,
|
||||
}, {
|
||||
'url': 'http://www.arte.tv/de/videos/048696-000-A/der-kluge-bauch-unser-zweites-gehirn',
|
||||
'only_matching': True,
|
||||
'url': 'https://www.arte.tv/en/videos/088501-000-A/mexico-stealing-petrol-to-survive/',
|
||||
'info_dict': {
|
||||
'id': '088501-000-A',
|
||||
'ext': 'mp4',
|
||||
'title': 'Mexico: Stealing Petrol to Survive',
|
||||
'upload_date': '20190628',
|
||||
},
|
||||
}]
|
||||
|
||||
@classmethod
|
||||
def suitable(cls, url):
|
||||
return False if ArteTVPlaylistIE.suitable(url) else super(ArteTVPlus7IE, cls).suitable(url)
|
||||
|
||||
def _real_extract(self, url):
|
||||
video_id, lang = self._extract_url_info(url)
|
||||
webpage = self._download_webpage(url, video_id)
|
||||
return self._extract_from_webpage(webpage, video_id, lang)
|
||||
|
||||
def _extract_from_webpage(self, webpage, video_id, lang):
|
||||
patterns_templates = (r'arte_vp_url=["\'](.*?%s.*?)["\']', r'data-url=["\']([^"]+%s[^"]+)["\']')
|
||||
ids = (video_id, '')
|
||||
# some pages contain multiple videos (like
|
||||
# http://www.arte.tv/guide/de/sendungen/XEN/xenius/?vid=055918-015_PLUS7-D),
|
||||
# so we first try to look for json URLs that contain the video id from
|
||||
# the 'vid' parameter.
|
||||
patterns = [t % re.escape(_id) for _id in ids for t in patterns_templates]
|
||||
json_url = self._html_search_regex(
|
||||
patterns, webpage, 'json vp url', default=None)
|
||||
if not json_url:
|
||||
def find_iframe_url(webpage, default=NO_DEFAULT):
|
||||
return self._html_search_regex(
|
||||
r'<iframe[^>]+src=(["\'])(?P<url>.+\bjson_url=.+?)\1',
|
||||
webpage, 'iframe url', group='url', default=default)
|
||||
|
||||
iframe_url = find_iframe_url(webpage, None)
|
||||
if not iframe_url:
|
||||
embed_url = self._html_search_regex(
|
||||
r'arte_vp_url_oembed=\'([^\']+?)\'', webpage, 'embed url', default=None)
|
||||
if embed_url:
|
||||
player = self._download_json(
|
||||
embed_url, video_id, 'Downloading player page')
|
||||
iframe_url = find_iframe_url(player['html'])
|
||||
# en and es URLs produce react-based pages with different layout (e.g.
|
||||
# http://www.arte.tv/guide/en/053330-002-A/carnival-italy?zone=world)
|
||||
if not iframe_url:
|
||||
program = self._search_regex(
|
||||
r'program\s*:\s*({.+?["\']embed_html["\'].+?}),?\s*\n',
|
||||
webpage, 'program', default=None)
|
||||
if program:
|
||||
embed_html = self._parse_json(program, video_id)
|
||||
if embed_html:
|
||||
iframe_url = find_iframe_url(embed_html['embed_html'])
|
||||
if iframe_url:
|
||||
json_url = compat_parse_qs(
|
||||
compat_urllib_parse_urlparse(iframe_url).query)['json_url'][0]
|
||||
if json_url:
|
||||
title = self._search_regex(
|
||||
r'<h3[^>]+title=(["\'])(?P<title>.+?)\1',
|
||||
webpage, 'title', default=None, group='title')
|
||||
return self._extract_from_json_url(json_url, video_id, lang, title=title)
|
||||
# Different kind of embed URL (e.g.
|
||||
# http://www.arte.tv/magazine/trepalium/fr/episode-0406-replay-trepalium)
|
||||
entries = [
|
||||
self.url_result(url)
|
||||
for _, url in re.findall(r'<iframe[^>]+src=(["\'])(?P<url>.+?)\1', webpage)]
|
||||
return self.playlist_result(entries)
|
||||
|
||||
|
||||
# It also uses the arte_vp_url url from the webpage to extract the information
|
||||
class ArteTVCreativeIE(ArteTVPlus7IE):
|
||||
IE_NAME = 'arte.tv:creative'
|
||||
_VALID_URL = r'https?://creative\.arte\.tv/(?P<lang>fr|de|en|es)/(?:[^/]+/)*(?P<id>[^/?#&]+)'
|
||||
|
||||
_TESTS = [{
|
||||
'url': 'http://creative.arte.tv/fr/episode/osmosis-episode-1',
|
||||
'info_dict': {
|
||||
'id': '057405-001-A',
|
||||
'ext': 'mp4',
|
||||
'title': 'OSMOSIS - N\'AYEZ PLUS PEUR D\'AIMER (1)',
|
||||
'upload_date': '20150716',
|
||||
},
|
||||
}, {
|
||||
'url': 'http://creative.arte.tv/fr/Monty-Python-Reunion',
|
||||
'playlist_count': 11,
|
||||
'add_ie': ['Youtube'],
|
||||
}, {
|
||||
'url': 'http://creative.arte.tv/de/episode/agentur-amateur-4-der-erste-kunde',
|
||||
'only_matching': True,
|
||||
}]
|
||||
|
||||
|
||||
class ArteTVInfoIE(ArteTVPlus7IE):
|
||||
IE_NAME = 'arte.tv:info'
|
||||
_VALID_URL = r'https?://info\.arte\.tv/(?P<lang>fr|de|en|es)/(?:[^/]+/)*(?P<id>[^/?#&]+)'
|
||||
|
||||
_TESTS = [{
|
||||
'url': 'http://info.arte.tv/fr/service-civique-un-cache-misere',
|
||||
'info_dict': {
|
||||
'id': '067528-000-A',
|
||||
'ext': 'mp4',
|
||||
'title': 'Service civique, un cache misère ?',
|
||||
'upload_date': '20160403',
|
||||
},
|
||||
}]
|
||||
|
||||
|
||||
class ArteTVFutureIE(ArteTVPlus7IE):
|
||||
IE_NAME = 'arte.tv:future'
|
||||
_VALID_URL = r'https?://future\.arte\.tv/(?P<lang>fr|de|en|es)/(?P<id>[^/?#&]+)'
|
||||
|
||||
_TESTS = [{
|
||||
'url': 'http://future.arte.tv/fr/info-sciences/les-ecrevisses-aussi-sont-anxieuses',
|
||||
'info_dict': {
|
||||
'id': '050940-028-A',
|
||||
'ext': 'mp4',
|
||||
'title': 'Les écrevisses aussi peuvent être anxieuses',
|
||||
'upload_date': '20140902',
|
||||
},
|
||||
}, {
|
||||
'url': 'http://future.arte.tv/fr/la-science-est-elle-responsable',
|
||||
'only_matching': True,
|
||||
}]
|
||||
|
||||
|
||||
class ArteTVDDCIE(ArteTVPlus7IE):
|
||||
IE_NAME = 'arte.tv:ddc'
|
||||
_VALID_URL = r'https?://ddc\.arte\.tv/(?P<lang>emission|folge)/(?P<id>[^/?#&]+)'
|
||||
|
||||
_TESTS = []
|
||||
|
||||
def _real_extract(self, url):
|
||||
video_id, lang = self._extract_url_info(url)
|
||||
if lang == 'folge':
|
||||
lang = 'de'
|
||||
elif lang == 'emission':
|
||||
lang = 'fr'
|
||||
webpage = self._download_webpage(url, video_id)
|
||||
scriptElement = get_element_by_attribute('class', 'visu_video_block', webpage)
|
||||
script_url = self._html_search_regex(r'src="(.*?)"', scriptElement, 'script url')
|
||||
javascriptPlayerGenerator = self._download_webpage(script_url, video_id, 'Download javascript player generator')
|
||||
json_url = self._search_regex(r"json_url=(.*)&rendering_place.*", javascriptPlayerGenerator, 'json url')
|
||||
return self._extract_from_json_url(json_url, video_id, lang)
|
||||
|
||||
|
||||
class ArteTVConcertIE(ArteTVPlus7IE):
|
||||
IE_NAME = 'arte.tv:concert'
|
||||
_VALID_URL = r'https?://concert\.arte\.tv/(?P<lang>fr|de|en|es)/(?P<id>[^/?#&]+)'
|
||||
|
||||
_TESTS = [{
|
||||
'url': 'http://concert.arte.tv/de/notwist-im-pariser-konzertclub-divan-du-monde',
|
||||
'md5': '9ea035b7bd69696b67aa2ccaaa218161',
|
||||
'info_dict': {
|
||||
'id': '186',
|
||||
'ext': 'mp4',
|
||||
'title': 'The Notwist im Pariser Konzertclub "Divan du Monde"',
|
||||
'upload_date': '20140128',
|
||||
'description': 'md5:486eb08f991552ade77439fe6d82c305',
|
||||
},
|
||||
}]
|
||||
|
||||
|
||||
class ArteTVCinemaIE(ArteTVPlus7IE):
|
||||
IE_NAME = 'arte.tv:cinema'
|
||||
_VALID_URL = r'https?://cinema\.arte\.tv/(?P<lang>fr|de|en|es)/(?P<id>.+)'
|
||||
|
||||
_TESTS = [{
|
||||
'url': 'http://cinema.arte.tv/fr/article/les-ailes-du-desir-de-julia-reck',
|
||||
'md5': 'a5b9dd5575a11d93daf0e3f404f45438',
|
||||
'info_dict': {
|
||||
'id': '062494-000-A',
|
||||
'ext': 'mp4',
|
||||
'title': 'Film lauréat du concours web - "Les ailes du désir" de Julia Reck',
|
||||
'upload_date': '20150807',
|
||||
},
|
||||
}]
|
||||
|
||||
|
||||
class ArteTVMagazineIE(ArteTVPlus7IE):
|
||||
IE_NAME = 'arte.tv:magazine'
|
||||
_VALID_URL = r'https?://(?:www\.)?arte\.tv/magazine/[^/]+/(?P<lang>fr|de|en|es)/(?P<id>[^/?#&]+)'
|
||||
|
||||
_TESTS = [{
|
||||
# Embedded via <iframe src="http://www.arte.tv/arte_vp/index.php?json_url=..."
|
||||
'url': 'http://www.arte.tv/magazine/trepalium/fr/entretien-avec-le-realisateur-vincent-lannoo-trepalium',
|
||||
'md5': '2a9369bcccf847d1c741e51416299f25',
|
||||
'info_dict': {
|
||||
'id': '065965-000-A',
|
||||
'ext': 'mp4',
|
||||
'title': 'Trepalium - Extrait Ep.01',
|
||||
'upload_date': '20160121',
|
||||
},
|
||||
}, {
|
||||
# Embedded via <iframe src="http://www.arte.tv/guide/fr/embed/054813-004-A/medium"
|
||||
'url': 'http://www.arte.tv/magazine/trepalium/fr/episode-0406-replay-trepalium',
|
||||
'md5': 'fedc64fc7a946110fe311634e79782ca',
|
||||
'info_dict': {
|
||||
'id': '054813-004_PLUS7-F',
|
||||
'ext': 'mp4',
|
||||
'title': 'Trepalium (4/6)',
|
||||
'description': 'md5:10057003c34d54e95350be4f9b05cb40',
|
||||
'upload_date': '20160218',
|
||||
},
|
||||
}, {
|
||||
'url': 'http://www.arte.tv/magazine/metropolis/de/frank-woeste-german-paris-metropolis',
|
||||
'only_matching': True,
|
||||
}]
|
||||
lang, video_id = re.match(self._VALID_URL, url).groups()
|
||||
return self._extract_from_json_url(
|
||||
'https://api.arte.tv/api/player/v1/config/%s/%s' % (lang, video_id),
|
||||
video_id, lang)
|
||||
|
||||
|
||||
class ArteTVEmbedIE(ArteTVPlus7IE):
|
||||
IE_NAME = 'arte.tv:embed'
|
||||
_VALID_URL = r'''(?x)
|
||||
http://www\.arte\.tv
|
||||
/(?:playerv2/embed|arte_vp/index)\.php\?json_url=
|
||||
https://www\.arte\.tv
|
||||
/player/v3/index\.php\?json_url=
|
||||
(?P<json_url>
|
||||
http://arte\.tv/papi/tvguide/videos/stream/player/
|
||||
(?P<lang>[^/]+)/(?P<id>[^/]+)[^&]*
|
||||
https?://api\.arte\.tv/api/player/v1/config/
|
||||
(?P<lang>[^/]+)/(?P<id>\d{6}-\d{3}-[AF])
|
||||
)
|
||||
'''
|
||||
|
||||
_TESTS = []
|
||||
|
||||
def _real_extract(self, url):
|
||||
mobj = re.match(self._VALID_URL, url)
|
||||
video_id = mobj.group('id')
|
||||
lang = mobj.group('lang')
|
||||
json_url = mobj.group('json_url')
|
||||
json_url, lang, video_id = re.match(self._VALID_URL, url).groups()
|
||||
return self._extract_from_json_url(json_url, video_id, lang)
|
||||
|
||||
|
||||
class TheOperaPlatformIE(ArteTVPlus7IE):
|
||||
IE_NAME = 'theoperaplatform'
|
||||
_VALID_URL = r'https?://(?:www\.)?theoperaplatform\.eu/(?P<lang>fr|de|en|es)/(?P<id>[^/?#&]+)'
|
||||
|
||||
_TESTS = [{
|
||||
'url': 'http://www.theoperaplatform.eu/de/opera/verdi-otello',
|
||||
'md5': '970655901fa2e82e04c00b955e9afe7b',
|
||||
'info_dict': {
|
||||
'id': '060338-009-A',
|
||||
'ext': 'mp4',
|
||||
'title': 'Verdi - OTELLO',
|
||||
'upload_date': '20160927',
|
||||
},
|
||||
}]
|
||||
|
||||
|
||||
class ArteTVPlaylistIE(ArteTVBaseIE):
|
||||
IE_NAME = 'arte.tv:playlist'
|
||||
_VALID_URL = r'https?://(?:www\.)?arte\.tv/guide/(?P<lang>fr|de|en|es)/[^#]*#collection/(?P<id>PL-\d+)'
|
||||
_VALID_URL = r'https?://(?:www\.)?arte\.tv/(?P<lang>fr|de|en|es|it|pl)/videos/(?P<id>RC-\d{6})'
|
||||
|
||||
_TESTS = [{
|
||||
'url': 'http://www.arte.tv/guide/de/plus7/?country=DE#collection/PL-013263/ARTETV',
|
||||
'url': 'https://www.arte.tv/en/videos/RC-016954/earn-a-living/',
|
||||
'info_dict': {
|
||||
'id': 'PL-013263',
|
||||
'title': 'Areva & Uramin',
|
||||
'description': 'md5:a1dc0312ce357c262259139cfd48c9bf',
|
||||
'id': 'RC-016954',
|
||||
'title': 'Earn a Living',
|
||||
'description': 'md5:d322c55011514b3a7241f7fb80d494c2',
|
||||
},
|
||||
'playlist_mincount': 6,
|
||||
}, {
|
||||
'url': 'http://www.arte.tv/guide/de/playlists?country=DE#collection/PL-013190/ARTETV',
|
||||
'only_matching': True,
|
||||
}]
|
||||
|
||||
def _real_extract(self, url):
|
||||
playlist_id, lang = self._extract_url_info(url)
|
||||
lang, playlist_id = re.match(self._VALID_URL, url).groups()
|
||||
collection = self._download_json(
|
||||
'https://api.arte.tv/api/player/v1/collectionData/%s/%s?source=videos'
|
||||
% (lang, playlist_id), playlist_id)
|
||||
|
@ -99,8 +99,8 @@ class BeamProLiveIE(BeamProBaseIE):
|
||||
|
||||
class BeamProVodIE(BeamProBaseIE):
|
||||
IE_NAME = 'Mixer:vod'
|
||||
_VALID_URL = r'https?://(?:\w+\.)?(?:beam\.pro|mixer\.com)/[^/?#&]+\?.*?\bvod=(?P<id>\d+)'
|
||||
_TEST = {
|
||||
_VALID_URL = r'https?://(?:\w+\.)?(?:beam\.pro|mixer\.com)/[^/?#&]+\?.*?\bvod=(?P<id>[^?#&]+)'
|
||||
_TESTS = [{
|
||||
'url': 'https://mixer.com/willow8714?vod=2259830',
|
||||
'md5': 'b2431e6e8347dc92ebafb565d368b76b',
|
||||
'info_dict': {
|
||||
@ -119,7 +119,13 @@ class BeamProVodIE(BeamProBaseIE):
|
||||
'params': {
|
||||
'skip_download': True,
|
||||
},
|
||||
}
|
||||
}, {
|
||||
'url': 'https://mixer.com/streamer?vod=IxFno1rqC0S_XJ1a2yGgNw',
|
||||
'only_matching': True,
|
||||
}, {
|
||||
'url': 'https://mixer.com/streamer?vod=Rh3LY0VAqkGpEQUe2pN-ig',
|
||||
'only_matching': True,
|
||||
}]
|
||||
|
||||
@staticmethod
|
||||
def _extract_format(vod, vod_type):
|
||||
|
youtube_dl/extractor/beeg.py

@@ -1,7 +1,10 @@
from __future__ import unicode_literals

from .common import InfoExtractor
from ..compat import compat_str
from ..compat import (
    compat_str,
    compat_urlparse,
)
from ..utils import (
    int_or_none,
    unified_timestamp,
@@ -11,6 +14,7 @@ from ..utils import (
class BeegIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?beeg\.(?:com|porn(?:/video)?)/(?P<id>\d+)'
    _TESTS = [{
        # api/v6 v1
        'url': 'http://beeg.com/5416503',
        'md5': 'a1a1b1a8bc70a89e49ccfd113aed0820',
        'info_dict': {
@@ -24,6 +28,10 @@ class BeegIE(InfoExtractor):
            'tags': list,
            'age_limit': 18,
        }
    }, {
        # api/v6 v2
        'url': 'https://beeg.com/1941093077?t=911-1391',
        'only_matching': True,
    }, {
        'url': 'https://beeg.porn/video/5416503',
        'only_matching': True,
@@ -41,11 +49,22 @@ class BeegIE(InfoExtractor):
            r'beeg_version\s*=\s*([\da-zA-Z_-]+)', webpage, 'beeg version',
            default='1546225636701')

        qs = compat_urlparse.parse_qs(compat_urlparse.urlparse(url).query)
        t = qs.get('t', [''])[0].split('-')
        if len(t) > 1:
            query = {
                'v': 2,
                's': t[0],
                'e': t[1],
            }
        else:
            query = {'v': 1}

        for api_path in ('', 'api.'):
            video = self._download_json(
                'https://%sbeeg.com/api/v6/%s/video/%s'
                % (api_path, beeg_version, video_id), video_id,
                fatal=api_path == 'api.')
                fatal=api_path == 'api.', query=query)
            if video:
                break

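Note (not part of the diff): how the new ?t=start-end handling above maps onto the v2 API query, shown standalone with the stdlib instead of youtube_dl.compat; the URL is one of the test URLs.

from urllib.parse import parse_qs, urlparse  # Python 3 equivalent of compat_urlparse

url = 'https://beeg.com/1941093077?t=911-1391'
t = parse_qs(urlparse(url).query).get('t', [''])[0].split('-')
query = {'v': 2, 's': t[0], 'e': t[1]} if len(t) > 1 else {'v': 1}
print(query)  # {'v': 2, 's': '911', 'e': '1391'}
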
@ -483,7 +483,7 @@ class BrightcoveLegacyIE(InfoExtractor):
|
||||
|
||||
class BrightcoveNewIE(AdobePassIE):
|
||||
IE_NAME = 'brightcove:new'
|
||||
_VALID_URL = r'https?://players\.brightcove\.net/(?P<account_id>\d+)/(?P<player_id>[^/]+)_(?P<embed>[^/]+)/index\.html\?.*videoId=(?P<video_id>\d+|ref:[^&]+)'
|
||||
_VALID_URL = r'https?://players\.brightcove\.net/(?P<account_id>\d+)/(?P<player_id>[^/]+)_(?P<embed>[^/]+)/index\.html\?.*(?P<content_type>video|playlist)Id=(?P<video_id>\d+|ref:[^&]+)'
|
||||
_TESTS = [{
|
||||
'url': 'http://players.brightcove.net/929656772001/e41d32dc-ec74-459e-a845-6c69f7b724ea_default/index.html?videoId=4463358922001',
|
||||
'md5': 'c8100925723840d4b0d243f7025703be',
|
||||
@ -516,6 +516,21 @@ class BrightcoveNewIE(AdobePassIE):
|
||||
# m3u8 download
|
||||
'skip_download': True,
|
||||
}
|
||||
}, {
|
||||
# playlist stream
|
||||
'url': 'https://players.brightcove.net/1752604059001/S13cJdUBz_default/index.html?playlistId=5718313430001',
|
||||
'info_dict': {
|
||||
'id': '5718313430001',
|
||||
'title': 'No Audio Playlist',
|
||||
},
|
||||
'playlist_count': 7,
|
||||
'params': {
|
||||
# m3u8 download
|
||||
'skip_download': True,
|
||||
}
|
||||
}, {
|
||||
'url': 'http://players.brightcove.net/5690807595001/HyZNerRl7_default/index.html?playlistId=5743160747001',
|
||||
'only_matching': True,
|
||||
}, {
|
||||
# ref: prefixed video id
|
||||
'url': 'http://players.brightcove.net/3910869709001/21519b5c-4b3b-4363-accb-bdc8f358f823_default/index.html?videoId=ref:7069442',
|
||||
@ -715,7 +730,7 @@ class BrightcoveNewIE(AdobePassIE):
|
||||
'ip_blocks': smuggled_data.get('geo_ip_blocks'),
|
||||
})
|
||||
|
||||
account_id, player_id, embed, video_id = re.match(self._VALID_URL, url).groups()
|
||||
account_id, player_id, embed, content_type, video_id = re.match(self._VALID_URL, url).groups()
|
||||
|
||||
webpage = self._download_webpage(
|
||||
'http://players.brightcove.net/%s/%s_%s/index.min.js'
|
||||
@ -736,7 +751,7 @@ class BrightcoveNewIE(AdobePassIE):
|
||||
r'policyKey\s*:\s*(["\'])(?P<pk>.+?)\1',
|
||||
webpage, 'policy key', group='pk')
|
||||
|
||||
api_url = 'https://edge.api.brightcove.com/playback/v1/accounts/%s/videos/%s' % (account_id, video_id)
|
||||
api_url = 'https://edge.api.brightcove.com/playback/v1/accounts/%s/%ss/%s' % (account_id, content_type, video_id)
|
||||
headers = {
|
||||
'Accept': 'application/json;pk=%s' % policy_key,
|
||||
}
|
||||
@ -771,5 +786,12 @@ class BrightcoveNewIE(AdobePassIE):
|
||||
'tveToken': tve_token,
|
||||
})
|
||||
|
||||
if content_type == 'playlist':
|
||||
return self.playlist_result(
|
||||
[self._parse_brightcove_metadata(vid, vid.get('id'), headers)
|
||||
for vid in json_data.get('videos', []) if vid.get('id')],
|
||||
json_data.get('id'), json_data.get('name'),
|
||||
json_data.get('description'))
|
||||
|
||||
return self._parse_brightcove_metadata(
|
||||
json_data, video_id, headers=headers)
|
||||
|
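Note (not part of the diff): the Playback API endpoint used in the brightcove change above simply pluralizes the new content_type group; a small sketch (the account and content ids below are taken from the tests).

def playback_api_url(account_id, content_type, content_id):
    # content_type is 'video' or 'playlist'
    return ('https://edge.api.brightcove.com/playback/v1/accounts/%s/%ss/%s'
            % (account_id, content_type, content_id))

print(playback_api_url('1752604059001', 'playlist', '5718313430001'))
print(playback_api_url('929656772001', 'video', '4463358922001'))
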
@ -103,19 +103,6 @@ class CrunchyrollBaseIE(InfoExtractor):
|
||||
def _real_initialize(self):
|
||||
self._login()
|
||||
|
||||
def _download_webpage(self, url_or_request, *args, **kwargs):
|
||||
request = (url_or_request if isinstance(url_or_request, compat_urllib_request.Request)
|
||||
else sanitized_Request(url_or_request))
|
||||
# Accept-Language must be set explicitly to accept any language to avoid issues
|
||||
# similar to https://github.com/ytdl-org/youtube-dl/issues/6797.
|
||||
# Along with IP address Crunchyroll uses Accept-Language to guess whether georestriction
|
||||
# should be imposed or not (from what I can see it just takes the first language
|
||||
# ignoring the priority and requires it to correspond the IP). By the way this causes
|
||||
# Crunchyroll to not work in georestriction cases in some browsers that don't place
|
||||
# the locale lang first in header. However allowing any language seems to workaround the issue.
|
||||
request.add_header('Accept-Language', '*')
|
||||
return super(CrunchyrollBaseIE, self)._download_webpage(request, *args, **kwargs)
|
||||
|
||||
@staticmethod
|
||||
def _add_skip_wall(url):
|
||||
parsed_url = compat_urlparse.urlparse(url)
|
||||
@ -269,6 +256,19 @@ class CrunchyrollIE(CrunchyrollBaseIE, VRVIE):
|
||||
'1080': ('80', '108'),
|
||||
}
|
||||
|
||||
def _download_webpage(self, url_or_request, *args, **kwargs):
|
||||
request = (url_or_request if isinstance(url_or_request, compat_urllib_request.Request)
|
||||
else sanitized_Request(url_or_request))
|
||||
# Accept-Language must be set explicitly to accept any language to avoid issues
|
||||
# similar to https://github.com/ytdl-org/youtube-dl/issues/6797.
|
||||
# Along with IP address Crunchyroll uses Accept-Language to guess whether georestriction
|
||||
# should be imposed or not (from what I can see it just takes the first language
|
||||
# ignoring the priority and requires it to correspond the IP). By the way this causes
|
||||
# Crunchyroll to not work in georestriction cases in some browsers that don't place
|
||||
# the locale lang first in header. However allowing any language seems to workaround the issue.
|
||||
request.add_header('Accept-Language', '*')
|
||||
return super(CrunchyrollBaseIE, self)._download_webpage(request, *args, **kwargs)
|
||||
|
||||
def _decrypt_subtitles(self, data, iv, id):
|
||||
data = bytes_to_intlist(compat_b64decode(data))
|
||||
iv = bytes_to_intlist(compat_b64decode(iv))
|
||||
@ -661,9 +661,8 @@ class CrunchyrollShowPlaylistIE(CrunchyrollBaseIE):
|
||||
webpage = self._download_webpage(
|
||||
self._add_skip_wall(url), show_id,
|
||||
headers=self.geo_verification_headers())
|
||||
title = self._html_search_regex(
|
||||
r'(?s)<h1[^>]*>\s*<span itemprop="name">(.*?)</span>',
|
||||
webpage, 'title')
|
||||
title = self._html_search_meta('name', webpage, default=None)
|
||||
|
||||
episode_paths = re.findall(
|
||||
r'(?s)<li id="showview_videos_media_(\d+)"[^>]+>.*?<a href="([^"]+)"',
|
||||
webpage)
|
||||
|
youtube_dl/extractor/dailymotion.py

@@ -137,10 +137,16 @@ class DailymotionIE(DailymotionBaseInfoExtractor):

    @staticmethod
    def _extract_urls(webpage):
        urls = []
        # Look for embedded Dailymotion player
        matches = re.findall(
            r'<(?:(?:embed|iframe)[^>]+?src=|input[^>]+id=[\'"]dmcloudUrlEmissionSelect[\'"][^>]+value=)(["\'])(?P<url>(?:https?:)?//(?:www\.)?dailymotion\.com/(?:embed|swf)/video/.+?)\1', webpage)
        return list(map(lambda m: unescapeHTML(m[1]), matches))
        # https://developer.dailymotion.com/player#player-parameters
        for mobj in re.finditer(
                r'<(?:(?:embed|iframe)[^>]+?src=|input[^>]+id=[\'"]dmcloudUrlEmissionSelect[\'"][^>]+value=)(["\'])(?P<url>(?:https?:)?//(?:www\.)?dailymotion\.com/(?:embed|swf)/video/.+?)\1', webpage):
            urls.append(unescapeHTML(mobj.group('url')))
        for mobj in re.finditer(
                r'(?s)DM\.player\([^,]+,\s*{.*?video[\'"]?\s*:\s*["\']?(?P<id>[0-9a-zA-Z]+).+?}\s*\);', webpage):
            urls.append('https://www.dailymotion.com/embed/video/' + mobj.group('id'))
        return urls

    def _real_extract(self, url):
        video_id = self._match_id(url)

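Note (not part of the diff): the second finditer loop above is what picks up DM.player embeds; a standalone sketch against a made-up page snippet.

import re

webpage = "<script>DM.player(document.getElementById('player'), {video: 'x5kesuj', width: '100%'});</script>"
for mobj in re.finditer(
        r'(?s)DM\.player\([^,]+,\s*{.*?video[\'"]?\s*:\s*["\']?(?P<id>[0-9a-zA-Z]+).+?}\s*\);',
        webpage):
    print('https://www.dailymotion.com/embed/video/' + mobj.group('id'))
# prints: https://www.dailymotion.com/embed/video/x5kesuj
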
youtube_dl/extractor/drtv.py

@@ -24,7 +24,7 @@ from ..utils import (


class DRTVIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?dr\.dk/(?:tv/se|nyheder|radio/ondemand)/(?:[^/]+/)*(?P<id>[\da-z-]+)(?:[/#?]|$)'
    _VALID_URL = r'https?://(?:www\.)?dr\.dk/(?:tv/se|nyheder|radio(?:/ondemand)?)/(?:[^/]+/)*(?P<id>[\da-z-]+)(?:[/#?]|$)'
    _GEO_BYPASS = False
    _GEO_COUNTRIES = ['DK']
    IE_NAME = 'drtv'
@@ -80,6 +80,9 @@ class DRTVIE(InfoExtractor):
        'params': {
            'skip_download': True,
        },
    }, {
        'url': 'https://www.dr.dk/radio/p4kbh/regionale-nyheder-kh4/p4-nyheder-2019-06-26-17-30-9',
        'only_matching': True,
    }]

    def _real_extract(self, url):

@ -58,17 +58,8 @@ from .ard import (
|
||||
ARDMediathekIE,
|
||||
)
|
||||
from .arte import (
|
||||
ArteTvIE,
|
||||
ArteTVPlus7IE,
|
||||
ArteTVCreativeIE,
|
||||
ArteTVConcertIE,
|
||||
ArteTVInfoIE,
|
||||
ArteTVFutureIE,
|
||||
ArteTVCinemaIE,
|
||||
ArteTVDDCIE,
|
||||
ArteTVMagazineIE,
|
||||
ArteTVEmbedIE,
|
||||
TheOperaPlatformIE,
|
||||
ArteTVPlaylistIE,
|
||||
)
|
||||
from .asiancrush import (
|
||||
@ -745,7 +736,6 @@ from .nexx import (
|
||||
NexxIE,
|
||||
NexxEmbedIE,
|
||||
)
|
||||
from .nfb import NFBIE
|
||||
from .nfl import NFLIE
|
||||
from .nhk import NhkVodIE
|
||||
from .nhl import NHLIE
|
||||
@ -892,8 +882,9 @@ from .porncom import PornComIE
|
||||
from .pornhd import PornHdIE
|
||||
from .pornhub import (
|
||||
PornHubIE,
|
||||
PornHubPlaylistIE,
|
||||
PornHubUserVideosIE,
|
||||
PornHubUserIE,
|
||||
PornHubPagedVideoListIE,
|
||||
PornHubUserVideosUploadIE,
|
||||
)
|
||||
from .pornotube import PornotubeIE
|
||||
from .pornovoisines import PornoVoisinesIE
|
||||
|
@ -1,35 +1,84 @@
|
||||
from __future__ import unicode_literals
|
||||
|
||||
from .common import InfoExtractor
|
||||
from .ooyala import OoyalaIE
|
||||
from ..utils import (
|
||||
determine_ext,
|
||||
int_or_none,
|
||||
mimetype2ext,
|
||||
parse_iso8601,
|
||||
)
|
||||
|
||||
|
||||
class FusionIE(InfoExtractor):
|
||||
_VALID_URL = r'https?://(?:www\.)?fusion\.(?:net|tv)/video/(?P<id>\d+)'
|
||||
_VALID_URL = r'https?://(?:www\.)?fusion\.(?:net|tv)/(?:video/|show/.+?\bvideo=)(?P<id>\d+)'
|
||||
_TESTS = [{
|
||||
'url': 'http://fusion.tv/video/201781/u-s-and-panamanian-forces-work-together-to-stop-a-vessel-smuggling-drugs/',
|
||||
'info_dict': {
|
||||
'id': 'ZpcWNoMTE6x6uVIIWYpHh0qQDjxBuq5P',
|
||||
'id': '3145868',
|
||||
'ext': 'mp4',
|
||||
'title': 'U.S. and Panamanian forces work together to stop a vessel smuggling drugs',
|
||||
'description': 'md5:0cc84a9943c064c0f46b128b41b1b0d7',
|
||||
'duration': 140.0,
|
||||
'timestamp': 1442589635,
|
||||
'uploader': 'UNIVISON',
|
||||
'upload_date': '20150918',
|
||||
},
|
||||
'params': {
|
||||
'skip_download': True,
|
||||
},
|
||||
'add_ie': ['Ooyala'],
|
||||
'add_ie': ['Anvato'],
|
||||
}, {
|
||||
'url': 'http://fusion.tv/video/201781',
|
||||
'only_matching': True,
|
||||
}, {
|
||||
'url': 'https://fusion.tv/show/food-exposed-with-nelufar-hedayat/?ancla=full-episodes&video=588644',
|
||||
'only_matching': True,
|
||||
}]
|
||||
|
||||
def _real_extract(self, url):
|
||||
display_id = self._match_id(url)
|
||||
webpage = self._download_webpage(url, display_id)
|
||||
video_id = self._match_id(url)
|
||||
video = self._download_json(
|
||||
'https://platform.fusion.net/wp-json/fusiondotnet/v1/video/' + video_id, video_id)
|
||||
|
||||
ooyala_code = self._search_regex(
|
||||
r'data-ooyala-id=(["\'])(?P<code>(?:(?!\1).)+)\1',
|
||||
webpage, 'ooyala code', group='code')
|
||||
info = {
|
||||
'id': video_id,
|
||||
'title': video['title'],
|
||||
'description': video.get('excerpt'),
|
||||
'timestamp': parse_iso8601(video.get('published')),
|
||||
'series': video.get('show'),
|
||||
}
|
||||
|
||||
return OoyalaIE._build_url_result(ooyala_code)
|
||||
formats = []
|
||||
src = video.get('src') or {}
|
||||
for f_id, f in src.items():
|
||||
for q_id, q in f.items():
|
||||
q_url = q.get('url')
|
||||
if not q_url:
|
||||
continue
|
||||
ext = determine_ext(q_url, mimetype2ext(q.get('type')))
|
||||
if ext == 'smil':
|
||||
formats.extend(self._extract_smil_formats(q_url, video_id, fatal=False))
|
||||
elif f_id == 'm3u8-variant' or (ext == 'm3u8' and q_id == 'Variant'):
|
||||
formats.extend(self._extract_m3u8_formats(
|
||||
q_url, video_id, 'mp4', 'm3u8_native', m3u8_id='hls', fatal=False))
|
||||
else:
|
||||
formats.append({
|
||||
'format_id': '-'.join([f_id, q_id]),
|
||||
'url': q_url,
|
||||
'width': int_or_none(q.get('width')),
|
||||
'height': int_or_none(q.get('height')),
|
||||
'tbr': int_or_none(self._search_regex(r'_(\d+)\.m(?:p4|3u8)', q_url, 'bitrate')),
|
||||
'ext': 'mp4' if ext == 'm3u8' else ext,
|
||||
'protocol': 'm3u8_native' if ext == 'm3u8' else 'https',
|
||||
})
|
||||
if formats:
|
||||
self._sort_formats(formats)
|
||||
info['formats'] = formats
|
||||
else:
|
||||
info.update({
|
||||
'_type': 'url',
|
||||
'url': 'anvato:uni:' + video['video_ids']['anvato'],
|
||||
'ie_key': 'Anvato',
|
||||
})
|
||||
|
||||
return info
|
||||
|
@ -2104,6 +2104,23 @@ class GenericIE(InfoExtractor):
|
||||
},
|
||||
'expected_warnings': ['Failed to download MPD manifest'],
|
||||
},
|
||||
{
|
||||
# DailyMotion embed with DM.player
|
||||
'url': 'https://www.beinsports.com/us/copa-del-rey/video/the-locker-room-valencia-beat-barca-in-copa/1203804',
|
||||
'info_dict': {
|
||||
'id': 'k6aKkGHd9FJs4mtJN39',
|
||||
'ext': 'mp4',
|
||||
'title': 'The Locker Room: Valencia Beat Barca In Copa del Rey Final',
|
||||
'description': 'This video is private.',
|
||||
'uploader_id': 'x1jf30l',
|
||||
'uploader': 'beIN SPORTS USA',
|
||||
'upload_date': '20190528',
|
||||
'timestamp': 1559062971,
|
||||
},
|
||||
'params': {
|
||||
'skip_download': True,
|
||||
},
|
||||
},
|
||||
# {
|
||||
# # TODO: find another test
|
||||
# # http://schema.org/VideoObject
|
||||
|
@ -34,9 +34,13 @@ class GoIE(AdobePassIE):
|
||||
'watchdisneyxd': {
|
||||
'brand': '009',
|
||||
'resource_id': 'DisneyXD',
|
||||
},
|
||||
'disneynow': {
|
||||
'brand': '011',
|
||||
'resource_id': 'Disney',
|
||||
}
|
||||
}
|
||||
_VALID_URL = r'https?://(?:(?P<sub_domain>%s)\.)?go\.com/(?:(?:[^/]+/)*(?P<id>vdka\w+)|(?:[^/]+/)*(?P<display_id>[^/?#]+))'\
|
||||
_VALID_URL = r'https?://(?:(?:(?P<sub_domain>%s)\.)?go|(?P<sub_domain_2>disneynow))\.com/(?:(?:[^/]+/)*(?P<id>vdka\w+)|(?:[^/]+/)*(?P<display_id>[^/?#]+))'\
|
||||
% '|'.join(list(_SITE_INFO.keys()) + ['disneynow'])
|
||||
_TESTS = [{
|
||||
'url': 'http://abc.go.com/shows/designated-survivor/video/most-recent/VDKA3807643',
|
||||
@ -71,6 +75,9 @@ class GoIE(AdobePassIE):
|
||||
# brand 008
|
||||
'url': 'http://disneynow.go.com/shows/minnies-bow-toons/video/happy-campers/vdka4872013',
|
||||
'only_matching': True,
|
||||
}, {
|
||||
'url': 'https://disneynow.com/shows/minnies-bow-toons/video/happy-campers/vdka4872013',
|
||||
'only_matching': True,
|
||||
}]
|
||||
|
||||
def _extract_videos(self, brand, video_id='-1', show_id='-1'):
|
||||
@ -80,7 +87,9 @@ class GoIE(AdobePassIE):
|
||||
display_id)['video']
|
||||
|
||||
def _real_extract(self, url):
|
||||
sub_domain, video_id, display_id = re.match(self._VALID_URL, url).groups()
|
||||
mobj = re.match(self._VALID_URL, url)
|
||||
sub_domain = mobj.group('sub_domain') or mobj.group('sub_domain_2')
|
||||
video_id, display_id = mobj.group('id', 'display_id')
|
||||
site_info = self._SITE_INFO.get(sub_domain, {})
|
||||
brand = site_info.get('brand')
|
||||
if not video_id or not site_info:
|
||||
@ -89,7 +98,7 @@ class GoIE(AdobePassIE):
|
||||
# There may be inner quotes, e.g. data-video-id="'VDKA3609139'"
|
||||
# from http://freeform.go.com/shows/shadowhunters/episodes/season-2/1-this-guilty-blood
|
||||
r'data-video-id=["\']*(VDKA\w+)', webpage, 'video id',
|
||||
default=None)
|
||||
default=video_id)
|
||||
if not site_info:
|
||||
brand = self._search_regex(
|
||||
(r'data-brand=\s*["\']\s*(\d+)',
|
||||
|
@ -6,8 +6,8 @@ import re
|
||||
from .common import InfoExtractor
|
||||
from ..compat import compat_str
|
||||
from ..utils import (
|
||||
clean_html,
|
||||
determine_ext,
|
||||
extract_attributes,
|
||||
ExtractorError,
|
||||
float_or_none,
|
||||
int_or_none,
|
||||
@ -19,6 +19,7 @@ from ..utils import (
|
||||
|
||||
|
||||
class LecturioBaseIE(InfoExtractor):
|
||||
_API_BASE_URL = 'https://app.lecturio.com/api/en/latest/html5/'
|
||||
_LOGIN_URL = 'https://app.lecturio.com/en/login'
|
||||
_NETRC_MACHINE = 'lecturio'
|
||||
|
||||
@ -67,51 +68,56 @@ class LecturioIE(LecturioBaseIE):
|
||||
_VALID_URL = r'''(?x)
|
||||
https://
|
||||
(?:
|
||||
app\.lecturio\.com/[^/]+/(?P<id>[^/?#&]+)\.lecture|
|
||||
(?:www\.)?lecturio\.de/[^/]+/(?P<id_de>[^/?#&]+)\.vortrag
|
||||
app\.lecturio\.com/([^/]+/(?P<nt>[^/?#&]+)\.lecture|(?:\#/)?lecture/c/\d+/(?P<id>\d+))|
|
||||
(?:www\.)?lecturio\.de/[^/]+/(?P<nt_de>[^/?#&]+)\.vortrag
|
||||
)
|
||||
'''
|
||||
_TESTS = [{
|
||||
'url': 'https://app.lecturio.com/medical-courses/important-concepts-and-terms-introduction-to-microbiology.lecture#tab/videos',
|
||||
'md5': 'f576a797a5b7a5e4e4bbdfc25a6a6870',
|
||||
'md5': '9a42cf1d8282a6311bf7211bbde26fde',
|
||||
'info_dict': {
|
||||
'id': '39634',
|
||||
'ext': 'mp4',
|
||||
'title': 'Important Concepts and Terms – Introduction to Microbiology',
|
||||
'title': 'Important Concepts and Terms — Introduction to Microbiology',
|
||||
},
|
||||
'skip': 'Requires lecturio account credentials',
|
||||
}, {
|
||||
'url': 'https://www.lecturio.de/jura/oeffentliches-recht-staatsexamen.vortrag',
|
||||
'only_matching': True,
|
||||
}, {
|
||||
'url': 'https://app.lecturio.com/#/lecture/c/6434/39634',
|
||||
'only_matching': True,
|
||||
}]
|
||||
|
||||
_CC_LANGS = {
|
||||
'Arabic': 'ar',
|
||||
'Bulgarian': 'bg',
|
||||
'German': 'de',
|
||||
'English': 'en',
|
||||
'Spanish': 'es',
|
||||
'Persian': 'fa',
|
||||
'French': 'fr',
|
||||
'Japanese': 'ja',
|
||||
'Polish': 'pl',
|
||||
'Pashto': 'ps',
|
||||
'Russian': 'ru',
|
||||
}
|
||||
|
||||
def _real_extract(self, url):
|
||||
mobj = re.match(self._VALID_URL, url)
|
||||
display_id = mobj.group('id') or mobj.group('id_de')
|
||||
|
||||
webpage = self._download_webpage(
|
||||
'https://app.lecturio.com/en/lecture/%s/player.html' % display_id,
|
||||
display_id)
|
||||
|
||||
lecture_id = self._search_regex(
|
||||
r'lecture_id\s*=\s*(?:L_)?(\d+)', webpage, 'lecture id')
|
||||
|
||||
api_url = self._search_regex(
|
||||
r'lectureDataLink\s*:\s*(["\'])(?P<url>(?:(?!\1).)+)\1', webpage,
|
||||
'api url', group='url')
|
||||
|
||||
video = self._download_json(api_url, display_id)
|
||||
|
||||
nt = mobj.group('nt') or mobj.group('nt_de')
|
||||
lecture_id = mobj.group('id')
|
||||
display_id = nt or lecture_id
|
||||
api_path = 'lectures/' + lecture_id if lecture_id else 'lecture/' + nt + '.json'
|
||||
video = self._download_json(
|
||||
self._API_BASE_URL + api_path, display_id)
|
||||
title = video['title'].strip()
|
||||
if not lecture_id:
|
||||
pid = video.get('productId') or video.get('uid')
|
||||
if pid:
|
||||
spid = pid.split('_')
|
||||
if spid and len(spid) == 2:
|
||||
lecture_id = spid[1]
|
||||
|
||||
formats = []
|
||||
for format_ in video['content']['media']:
|
||||
@ -129,24 +135,30 @@ class LecturioIE(LecturioBaseIE):
|
||||
continue
|
||||
label = str_or_none(format_.get('label'))
|
||||
filesize = int_or_none(format_.get('fileSize'))
|
||||
formats.append({
|
||||
f = {
|
||||
'url': file_url,
|
||||
'format_id': label,
|
||||
'filesize': float_or_none(filesize, invscale=1000)
|
||||
})
|
||||
}
|
||||
if label:
|
||||
mobj = re.match(r'(\d+)p\s*\(([^)]+)\)', label)
|
||||
if mobj:
|
||||
f.update({
|
||||
'format_id': mobj.group(2),
|
||||
'height': int(mobj.group(1)),
|
||||
})
|
||||
formats.append(f)
|
||||
self._sort_formats(formats)
|
||||
|
||||
subtitles = {}
|
||||
automatic_captions = {}
|
||||
cc = self._parse_json(
|
||||
self._search_regex(
|
||||
r'subtitleUrls\s*:\s*({.+?})\s*,', webpage, 'subtitles',
|
||||
default='{}'), display_id, fatal=False)
|
||||
for cc_label, cc_url in cc.items():
|
||||
cc_url = url_or_none(cc_url)
|
||||
captions = video.get('captions') or []
|
||||
for cc in captions:
|
||||
cc_url = cc.get('url')
|
||||
if not cc_url:
|
||||
continue
|
||||
lang = self._search_regex(
|
||||
cc_label = cc.get('translatedCode')
|
||||
lang = cc.get('languageCode') or self._search_regex(
|
||||
r'/([a-z]{2})_', cc_url, 'lang',
|
||||
default=cc_label.split()[0] if cc_label else 'en')
|
||||
original_lang = self._search_regex(
|
||||
@ -160,7 +172,7 @@ class LecturioIE(LecturioBaseIE):
|
||||
})
|
||||
|
||||
return {
|
||||
'id': lecture_id,
|
||||
'id': lecture_id or nt,
|
||||
'title': title,
|
||||
'formats': formats,
|
||||
'subtitles': subtitles,
|
||||
@ -169,37 +181,40 @@ class LecturioIE(LecturioBaseIE):
|
||||
|
||||
|
||||
class LecturioCourseIE(LecturioBaseIE):
|
||||
_VALID_URL = r'https://app\.lecturio\.com/[^/]+/(?P<id>[^/?#&]+)\.course'
|
||||
_TEST = {
|
||||
_VALID_URL = r'https://app\.lecturio\.com/(?:[^/]+/(?P<nt>[^/?#&]+)\.course|(?:#/)?course/c/(?P<id>\d+))'
|
||||
_TESTS = [{
|
||||
'url': 'https://app.lecturio.com/medical-courses/microbiology-introduction.course#/',
|
||||
'info_dict': {
|
||||
'id': 'microbiology-introduction',
|
||||
'title': 'Microbiology: Introduction',
|
||||
'description': 'md5:13da8500c25880c6016ae1e6d78c386a',
|
||||
},
|
||||
'playlist_count': 45,
|
||||
'skip': 'Requires lecturio account credentials',
|
||||
}
|
||||
}, {
|
||||
'url': 'https://app.lecturio.com/#/course/c/6434',
|
||||
'only_matching': True,
|
||||
}]
|
||||
|
||||
def _real_extract(self, url):
|
||||
display_id = self._match_id(url)
|
||||
|
||||
webpage = self._download_webpage(url, display_id)
|
||||
|
||||
nt, course_id = re.match(self._VALID_URL, url).groups()
|
||||
display_id = nt or course_id
|
||||
api_path = 'courses/' + course_id if course_id else 'course/content/' + nt + '.json'
|
||||
course = self._download_json(
|
||||
self._API_BASE_URL + api_path, display_id)
|
||||
entries = []
|
||||
for mobj in re.finditer(
|
||||
r'(?s)<[^>]+\bdata-url=(["\'])(?:(?!\1).)+\.lecture\b[^>]+>',
|
||||
webpage):
|
||||
params = extract_attributes(mobj.group(0))
|
||||
lecture_url = urljoin(url, params.get('data-url'))
|
||||
lecture_id = params.get('data-id')
|
||||
for lecture in course.get('lectures', []):
|
||||
lecture_id = str_or_none(lecture.get('id'))
|
||||
lecture_url = lecture.get('url')
|
||||
if lecture_url:
|
||||
lecture_url = urljoin(url, lecture_url)
|
||||
else:
|
||||
lecture_url = 'https://app.lecturio.com/#/lecture/c/%s/%s' % (course_id, lecture_id)
|
||||
entries.append(self.url_result(
|
||||
lecture_url, ie=LecturioIE.ie_key(), video_id=lecture_id))
|
||||
|
||||
title = self._search_regex(
|
||||
r'<span[^>]+class=["\']content-title[^>]+>([^<]+)', webpage,
|
||||
'title', default=None)
|
||||
|
||||
return self.playlist_result(entries, display_id, title)
|
||||
return self.playlist_result(
|
||||
entries, display_id, course.get('title'),
|
||||
clean_html(course.get('description')))
|
||||
|
||||
|
||||
class LecturioDeCourseIE(LecturioBaseIE):
|
||||
|
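Note (not part of the diff): the format-label parsing added to the lecturio extractor above, applied to a made-up label value.

import re

label = '720p (medium)'
mobj = re.match(r'(\d+)p\s*\(([^)]+)\)', label)
if mobj:
    print({'format_id': mobj.group(2), 'height': int(mobj.group(1))})
# prints: {'format_id': 'medium', 'height': 720}
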
@@ -1,112 +0,0 @@
from __future__ import unicode_literals

from .common import InfoExtractor
from ..utils import (
clean_html,
determine_ext,
int_or_none,
qualities,
urlencode_postdata,
xpath_text,
)


class NFBIE(InfoExtractor):
IE_NAME = 'nfb'
IE_DESC = 'National Film Board of Canada'
_VALID_URL = r'https?://(?:www\.)?(?:nfb|onf)\.ca/film/(?P<id>[\da-z_-]+)'

_TEST = {
'url': 'https://www.nfb.ca/film/qallunaat_why_white_people_are_funny',
'info_dict': {
'id': 'qallunaat_why_white_people_are_funny',
'ext': 'flv',
'title': 'Qallunaat! Why White People Are Funny ',
'description': 'md5:6b8e32dde3abf91e58857b174916620c',
'duration': 3128,
'creator': 'Mark Sandiford',
'uploader': 'Mark Sandiford',
},
'params': {
# rtmp download
'skip_download': True,
}
}

def _real_extract(self, url):
video_id = self._match_id(url)

config = self._download_xml(
'https://www.nfb.ca/film/%s/player_config' % video_id,
video_id, 'Downloading player config XML',
data=urlencode_postdata({'getConfig': 'true'}),
headers={
'Content-Type': 'application/x-www-form-urlencoded',
'X-NFB-Referer': 'http://www.nfb.ca/medias/flash/NFBVideoPlayer.swf'
})

title, description, thumbnail, duration, uploader, author = [None] * 6
thumbnails, formats = [[]] * 2
subtitles = {}

for media in config.findall('./player/stream/media'):
if media.get('type') == 'posterImage':
quality_key = qualities(('low', 'high'))
thumbnails = []
for asset in media.findall('assets/asset'):
asset_url = xpath_text(asset, 'default/url', default=None)
if not asset_url:
continue
quality = asset.get('quality')
thumbnails.append({
'url': asset_url,
'id': quality,
'preference': quality_key(quality),
})
elif media.get('type') == 'video':
title = xpath_text(media, 'title', fatal=True)
for asset in media.findall('assets/asset'):
quality = asset.get('quality')
height = int_or_none(self._search_regex(
r'^(\d+)[pP]$', quality or '', 'height', default=None))
for node in asset:
streamer = xpath_text(node, 'streamerURI', default=None)
if not streamer:
continue
play_path = xpath_text(node, 'url', default=None)
if not play_path:
continue
formats.append({
'url': streamer,
'app': streamer.split('/', 3)[3],
'play_path': play_path,
'rtmp_live': False,
'ext': 'flv',
'format_id': '%s-%s' % (node.tag, quality) if quality else node.tag,
'height': height,
})
self._sort_formats(formats)
description = clean_html(xpath_text(media, 'description'))
uploader = xpath_text(media, 'author')
duration = int_or_none(media.get('duration'))
for subtitle in media.findall('./subtitles/subtitle'):
subtitle_url = xpath_text(subtitle, 'url', default=None)
if not subtitle_url:
continue
lang = xpath_text(subtitle, 'lang', default='en')
subtitles.setdefault(lang, []).append({
'url': subtitle_url,
'ext': (subtitle.get('format') or determine_ext(subtitle_url)).lower(),
})

return {
'id': video_id,
'title': title,
'description': description,
'thumbnails': thumbnails,
'duration': duration,
'creator': uploader,
'uploader': uploader,
'formats': formats,
'subtitles': subtitles,
}
File diff suppressed because it is too large
@@ -5,26 +5,27 @@ import re

from .common import InfoExtractor
from ..compat import (
compat_str,
# compat_str,
compat_HTTPError,
)
from ..utils import (
clean_html,
ExtractorError,
remove_end,
# remove_end,
str_or_none,
strip_or_none,
unified_timestamp,
urljoin,
# urljoin,
)


class PacktPubBaseIE(InfoExtractor):
_PACKT_BASE = 'https://www.packtpub.com'
_MAPT_REST = '%s/mapt-rest' % _PACKT_BASE
# _PACKT_BASE = 'https://www.packtpub.com'
_STATIC_PRODUCTS_BASE = 'https://static.packt-cdn.com/products/'


class PacktPubIE(PacktPubBaseIE):
_VALID_URL = r'https?://(?:(?:www\.)?packtpub\.com/mapt|subscription\.packtpub\.com)/video/[^/]+/(?P<course_id>\d+)/(?P<chapter_id>\d+)/(?P<id>\d+)'
_VALID_URL = r'https?://(?:(?:www\.)?packtpub\.com/mapt|subscription\.packtpub\.com)/video/[^/]+/(?P<course_id>\d+)/(?P<chapter_id>[^/]+)/(?P<id>[^/]+)(?:/(?P<display_id>[^/?&#]+))?'

_TESTS = [{
'url': 'https://www.packtpub.com/mapt/video/web-development/9781787122215/20528/20530/Project+Intro',
@@ -40,6 +41,9 @@ class PacktPubIE(PacktPubBaseIE):
}, {
'url': 'https://subscription.packtpub.com/video/web_development/9781787122215/20528/20530/project-intro',
'only_matching': True,
}, {
'url': 'https://subscription.packtpub.com/video/programming/9781838988906/p1/video1_1/business-card-project',
'only_matching': True,
}]
_NETRC_MACHINE = 'packtpub'
_TOKEN = None
@@ -50,9 +54,9 @@ class PacktPubIE(PacktPubBaseIE):
return
try:
self._TOKEN = self._download_json(
self._MAPT_REST + '/users/tokens', None,
'https://services.packtpub.com/auth-v1/users/tokens', None,
'Downloading Authorization Token', data=json.dumps({
'email': username,
'username': username,
'password': password,
}).encode())['data']['access']
except ExtractorError as e:
@@ -61,54 +65,40 @@ class PacktPubIE(PacktPubBaseIE):
raise ExtractorError(message, expected=True)
raise

def _handle_error(self, response):
if response.get('status') != 'success':
raise ExtractorError(
'% said: %s' % (self.IE_NAME, response['message']),
expected=True)

def _download_json(self, *args, **kwargs):
response = super(PacktPubIE, self)._download_json(*args, **kwargs)
self._handle_error(response)
return response

def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
course_id, chapter_id, video_id = mobj.group(
'course_id', 'chapter_id', 'id')
course_id, chapter_id, video_id, display_id = re.match(self._VALID_URL, url).groups()

headers = {}
if self._TOKEN:
headers['Authorization'] = 'Bearer ' + self._TOKEN
video = self._download_json(
'%s/users/me/products/%s/chapters/%s/sections/%s'
% (self._MAPT_REST, course_id, chapter_id, video_id), video_id,
'Downloading JSON video', headers=headers)['data']
try:
video_url = self._download_json(
'https://services.packtpub.com/products-v1/products/%s/%s/%s' % (course_id, chapter_id, video_id), video_id,
'Downloading JSON video', headers=headers)['data']
except ExtractorError as e:
if isinstance(e.cause, compat_HTTPError) and e.cause.code == 400:
self.raise_login_required('This video is locked')
raise

content = video.get('content')
if not content:
self.raise_login_required('This video is locked')
# TODO: find a better way to avoid duplicating course requests
# metadata = self._download_json(
# '%s/products/%s/chapters/%s/sections/%s/metadata'
# % (self._MAPT_REST, course_id, chapter_id, video_id),
# video_id)['data']

video_url = content['file']

metadata = self._download_json(
'%s/products/%s/chapters/%s/sections/%s/metadata'
% (self._MAPT_REST, course_id, chapter_id, video_id),
video_id)['data']

title = metadata['pageTitle']
course_title = metadata.get('title')
if course_title:
title = remove_end(title, ' - %s' % course_title)
timestamp = unified_timestamp(metadata.get('publicationDate'))
thumbnail = urljoin(self._PACKT_BASE, metadata.get('filepath'))
# title = metadata['pageTitle']
# course_title = metadata.get('title')
# if course_title:
# title = remove_end(title, ' - %s' % course_title)
# timestamp = unified_timestamp(metadata.get('publicationDate'))
# thumbnail = urljoin(self._PACKT_BASE, metadata.get('filepath'))

return {
'id': video_id,
'url': video_url,
'title': title,
'thumbnail': thumbnail,
'timestamp': timestamp,
'title': display_id or video_id, # title,
# 'thumbnail': thumbnail,
# 'timestamp': timestamp,
}


@@ -119,6 +109,7 @@ class PacktPubCourseIE(PacktPubBaseIE):
'info_dict': {
'id': '9781787122215',
'title': 'Learn Nodejs by building 12 projects [Video]',
'description': 'md5:489da8d953f416e51927b60a1c7db0aa',
},
'playlist_count': 90,
}, {
@@ -136,35 +127,38 @@ class PacktPubCourseIE(PacktPubBaseIE):
url, course_id = mobj.group('url', 'id')

course = self._download_json(
'%s/products/%s/metadata' % (self._MAPT_REST, course_id),
course_id)['data']
self._STATIC_PRODUCTS_BASE + '%s/toc' % course_id, course_id)
metadata = self._download_json(
self._STATIC_PRODUCTS_BASE + '%s/summary' % course_id,
course_id, fatal=False) or {}

entries = []
for chapter_num, chapter in enumerate(course['tableOfContents'], 1):
if chapter.get('type') != 'chapter':
continue
children = chapter.get('children')
if not isinstance(children, list):
for chapter_num, chapter in enumerate(course['chapters'], 1):
chapter_id = str_or_none(chapter.get('id'))
sections = chapter.get('sections')
if not chapter_id or not isinstance(sections, list):
continue
chapter_info = {
'chapter': chapter.get('title'),
'chapter_number': chapter_num,
'chapter_id': chapter.get('id'),
'chapter_id': chapter_id,
}
for section in children:
if section.get('type') != 'section':
continue
section_url = section.get('seoUrl')
if not isinstance(section_url, compat_str):
for section in sections:
section_id = str_or_none(section.get('id'))
if not section_id or section.get('contentType') != 'video':
continue
entry = {
'_type': 'url_transparent',
'url': urljoin(url + '/', section_url),
'url': '/'.join([url, chapter_id, section_id]),
'title': strip_or_none(section.get('title')),
'description': clean_html(section.get('summary')),
'thumbnail': metadata.get('coverImage'),
'timestamp': unified_timestamp(metadata.get('publicationDate')),
'ie_key': PacktPubIE.ie_key(),
}
entry.update(chapter_info)
entries.append(entry)

return self.playlist_result(entries, course_id, course.get('title'))
return self.playlist_result(
entries, course_id, metadata.get('title'),
clean_html(metadata.get('about')))
@@ -168,7 +168,7 @@ class PeerTubeIE(InfoExtractor):
@staticmethod
def _extract_peertube_url(webpage, source_url):
mobj = re.match(
r'https?://(?P<host>[^/]+)/videos/watch/(?P<id>%s)'
r'https?://(?P<host>[^/]+)/videos/(?:watch|embed)/(?P<id>%s)'
% PeerTubeIE._UUID_RE, source_url)
if mobj and any(p in webpage for p in (
'<title>PeerTube<',
@@ -14,7 +14,7 @@ class PhilharmonieDeParisIE(InfoExtractor):
_VALID_URL = r'''(?x)
https?://
(?:
live\.philharmoniedeparis\.fr/(?:[Cc]oncert/|misc/Playlist\.ashx\?id=)|
live\.philharmoniedeparis\.fr/(?:[Cc]oncert/|embed(?:app)?/|misc/Playlist\.ashx\?id=)|
pad\.philharmoniedeparis\.fr/doc/CIMU/
)
(?P<id>\d+)
@@ -40,6 +40,12 @@ class PhilharmonieDeParisIE(InfoExtractor):
}, {
'url': 'http://live.philharmoniedeparis.fr/misc/Playlist.ashx?id=1030324&track=&lang=fr',
'only_matching': True,
}, {
'url': 'https://live.philharmoniedeparis.fr/embedapp/1098406/berlioz-fantastique-lelio-les-siecles-national-youth-choir-of.html?lang=fr-FR',
'only_matching': True,
}, {
'url': 'https://live.philharmoniedeparis.fr/embed/1098406/berlioz-fantastique-lelio-les-siecles-national-youth-choir-of.html?lang=fr-FR',
'only_matching': True,
}]
_LIVE_URL = 'https://live.philharmoniedeparis.fr'
@@ -372,37 +372,92 @@ class PornHubPlaylistBaseIE(PornHubBaseIE):
entries, playlist_id, title, playlist.get('description'))


class PornHubPlaylistIE(PornHubPlaylistBaseIE):
_VALID_URL = r'https?://(?:[^/]+\.)?(?P<host>pornhub\.(?:com|net))/playlist/(?P<id>\d+)'
class PornHubUserIE(PornHubPlaylistBaseIE):
_VALID_URL = r'(?P<url>https?://(?:[^/]+\.)?pornhub\.(?:com|net)/(?:(?:user|channel)s|model|pornstar)/(?P<id>[^/?#&]+))(?:[?#&]|/(?!videos)|$)'
_TESTS = [{
'url': 'http://www.pornhub.com/playlist/4667351',
'info_dict': {
'id': '4667351',
'title': 'Nataly Hot',
},
'playlist_mincount': 2,
'url': 'https://www.pornhub.com/model/zoe_ph',
'playlist_mincount': 118,
}, {
'url': 'https://de.pornhub.com/playlist/4667351',
'url': 'https://www.pornhub.com/pornstar/liz-vicious',
'info_dict': {
'id': 'liz-vicious',
},
'playlist_mincount': 118,
}, {
'url': 'https://www.pornhub.com/users/russianveet69',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/channels/povd',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/model/zoe_ph?abc=1',
'only_matching': True,
}]

def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
user_id = mobj.group('id')
return self.url_result(
'%s/videos' % mobj.group('url'), ie=PornHubPagedVideoListIE.ie_key(),
video_id=user_id)

class PornHubUserVideosIE(PornHubPlaylistBaseIE):
_VALID_URL = r'https?://(?:[^/]+\.)?(?P<host>pornhub\.(?:com|net))/(?:(?:user|channel)s|model|pornstar)/(?P<id>[^/]+)/videos'

class PornHubPagedPlaylistBaseIE(PornHubPlaylistBaseIE):
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
host = mobj.group('host')
item_id = mobj.group('id')

page = int_or_none(self._search_regex(
r'\bpage=(\d+)', url, 'page', default=None))

page_url = self._make_page_url(url)

entries = []
for page_num in (page, ) if page is not None else itertools.count(1):
try:
webpage = self._download_webpage(
page_url, item_id, 'Downloading page %d' % page_num,
query={'page': page_num})
except ExtractorError as e:
if isinstance(e.cause, compat_HTTPError) and e.cause.code == 404:
break
raise
page_entries = self._extract_entries(webpage, host)
if not page_entries:
break
entries.extend(page_entries)
if not self._has_more(webpage):
break

return self.playlist_result(orderedSet(entries), item_id)


class PornHubPagedVideoListIE(PornHubPagedPlaylistBaseIE):
_VALID_URL = r'https?://(?:[^/]+\.)?(?P<host>pornhub\.(?:com|net))/(?P<id>(?:[^/]+/)*[^/?#&]+)'
_TESTS = [{
'url': 'http://www.pornhub.com/users/zoe_ph/videos/public',
'info_dict': {
'id': 'zoe_ph',
},
'playlist_mincount': 171,
'url': 'https://www.pornhub.com/model/zoe_ph/videos',
'only_matching': True,
}, {
'url': 'http://www.pornhub.com/users/rushandlia/videos',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/pornstar/jenny-blighe/videos',
'info_dict': {
'id': 'pornstar/jenny-blighe/videos',
},
'playlist_mincount': 149,
}, {
'url': 'https://www.pornhub.com/pornstar/jenny-blighe/videos?page=3',
'info_dict': {
'id': 'pornstar/jenny-blighe/videos',
},
'playlist_mincount': 40,
}, {
# default sorting as Top Rated Videos
'url': 'https://www.pornhub.com/channels/povd/videos',
'info_dict': {
'id': 'povd',
'id': 'channels/povd/videos',
},
'playlist_mincount': 293,
}, {
@@ -421,31 +476,107 @@ class PornHubUserVideosIE(PornHubPlaylistBaseIE):
'url': 'http://www.pornhub.com/users/zoe_ph/videos/public',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/model/jayndrea/videos/upload',
# Most Viewed Videos
'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos?o=mv',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/pornstar/jenny-blighe/videos/upload',
# Top Rated Videos
'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos?o=tr',
'only_matching': True,
}, {
# Longest Videos
'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos?o=lg',
'only_matching': True,
}, {
# Newest Videos
'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos?o=cm',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos/paid',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos/fanonly',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/video',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/video?page=3',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/video/search?search=123',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/categories/teen',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/categories/teen?page=3',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/hd',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/hd?page=3',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/described-video',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/described-video?page=2',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/video/incategories/60fps-1/hd-porn',
'only_matching': True,
}, {
'url': 'https://www.pornhub.com/playlist/44121572',
'info_dict': {
'id': 'playlist/44121572',
},
'playlist_mincount': 132,
}, {
'url': 'https://www.pornhub.com/playlist/4667351',
'only_matching': True,
}, {
'url': 'https://de.pornhub.com/playlist/4667351',
'only_matching': True,
}]

def _real_extract(self, url):
@classmethod
def suitable(cls, url):
return (False
if PornHubIE.suitable(url) or PornHubUserIE.suitable(url) or PornHubUserVideosUploadIE.suitable(url)
else super(PornHubPagedVideoListIE, cls).suitable(url))

def _make_page_url(self, url):
return url

@staticmethod
def _has_more(webpage):
return re.search(
r'''(?x)
<li[^>]+\bclass=["\']page_next|
<link[^>]+\brel=["\']next|
<button[^>]+\bid=["\']moreDataBtn
''', webpage) is not None


class PornHubUserVideosUploadIE(PornHubPagedPlaylistBaseIE):
_VALID_URL = r'(?P<url>https?://(?:[^/]+\.)?(?P<host>pornhub\.(?:com|net))/(?:(?:user|channel)s|model|pornstar)/(?P<id>[^/]+)/videos/upload)'
_TESTS = [{
'url': 'https://www.pornhub.com/pornstar/jenny-blighe/videos/upload',
'info_dict': {
'id': 'jenny-blighe',
},
'playlist_mincount': 129,
}, {
'url': 'https://www.pornhub.com/model/zoe_ph/videos/upload',
'only_matching': True,
}]

def _make_page_url(self, url):
mobj = re.match(self._VALID_URL, url)
host = mobj.group('host')
user_id = mobj.group('id')
return '%s/ajax' % mobj.group('url')

entries = []
for page_num in itertools.count(1):
try:
webpage = self._download_webpage(
url, user_id, 'Downloading page %d' % page_num,
query={'page': page_num})
except ExtractorError as e:
if isinstance(e.cause, compat_HTTPError) and e.cause.code == 404:
break
raise
page_entries = self._extract_entries(webpage, host)
if not page_entries:
break
entries.extend(page_entries)

return self.playlist_result(entries, user_id)
@staticmethod
def _has_more(webpage):
return True
@@ -19,7 +19,7 @@ from ..utils import (

class SixPlayIE(InfoExtractor):
IE_NAME = '6play'
_VALID_URL = r'(?:6play:|https?://(?:www\.)?(?P<domain>6play\.fr|rtlplay\.be|play\.rtl\.hr)/.+?-c_)(?P<id>[0-9]+)'
_VALID_URL = r'(?:6play:|https?://(?:www\.)?(?P<domain>6play\.fr|rtlplay\.be|play\.rtl\.hr|rtlmost\.hu)/.+?-c_)(?P<id>[0-9]+)'
_TESTS = [{
'url': 'https://www.6play.fr/minute-par-minute-p_9533/le-but-qui-a-marque-lhistoire-du-football-francais-c_12041051',
'md5': '31fcd112637baa0c2ab92c4fcd8baf27',
@@ -35,6 +35,9 @@ class SixPlayIE(InfoExtractor):
}, {
'url': 'https://play.rtl.hr/pj-masks-p_9455/epizoda-34-sezona-1-catboyevo-cudo-na-dva-kotaca-c_11984989',
'only_matching': True,
}, {
'url': 'https://www.rtlmost.hu/megtorve-p_14167/megtorve-6-resz-c_12397787',
'only_matching': True,
}]

def _real_extract(self, url):
@@ -43,6 +46,7 @@ class SixPlayIE(InfoExtractor):
'6play.fr': ('6play', 'm6web'),
'rtlplay.be': ('rtlbe_rtl_play', 'rtlbe'),
'play.rtl.hr': ('rtlhr_rtl_play', 'rtlhr'),
'rtlmost.hu': ('rtlhu_rtl_most', 'rtlhu'),
}.get(domain, ('6play', 'm6web'))

data = self._download_json(
@@ -221,7 +221,7 @@ class SoundcloudIE(InfoExtractor):
}
]

_CLIENT_ID = 'FweeGBOOEOYJWLJN3oEyToGLKhmSz0I7'
_CLIENT_ID = 'BeGVhOrGmfboy1LtiHTQF6Ejpt9ULJCI'

@staticmethod
def _extract_urls(webpage):
@@ -133,7 +133,7 @@ class TEDIE(InfoExtractor):

def _extract_info(self, webpage):
info_json = self._search_regex(
r'(?s)q\(\s*"\w+.init"\s*,\s*({.+})\)\s*</script>',
r'(?s)q\(\s*"\w+.init"\s*,\s*({.+?})\)\s*</script>',
webpage, 'info json')
return json.loads(info_json)
@@ -2,6 +2,7 @@
from __future__ import unicode_literals

from .common import InfoExtractor
from ..compat import compat_str


class TF1IE(InfoExtractor):
@@ -43,12 +44,49 @@ class TF1IE(InfoExtractor):
}, {
'url': 'http://www.tf1.fr/hd1/documentaire/videos/mylene-farmer-d-une-icone.html',
'only_matching': True,
}, {
'url': 'https://www.tf1.fr/tmc/quotidien-avec-yann-barthes/videos/quotidien-premiere-partie-11-juin-2019.html',
'info_dict': {
'id': '13641379',
'ext': 'mp4',
'title': 'md5:f392bc52245dc5ad43771650c96fb620',
'description': 'md5:44bc54f0a21322f5b91d68e76a544eae',
'upload_date': '20190611',
},
'params': {
# Sometimes wat serves the whole file with the --test option
'skip_download': True,
},
}]

def _real_extract(self, url):
video_id = self._match_id(url)

webpage = self._download_webpage(url, video_id)
wat_id = self._html_search_regex(
r'(["\'])(?:https?:)?//www\.wat\.tv/embedframe/.*?(?P<id>\d{8})\1',
webpage, 'wat id', group='id')

wat_id = None

data = self._parse_json(
self._search_regex(
r'__APOLLO_STATE__\s*=\s*({.+?})\s*(?:;|</script>)', webpage,
'data', default='{}'), video_id, fatal=False)

if data:
try:
wat_id = next(
video.get('streamId')
for key, video in data.items()
if isinstance(video, dict)
and video.get('slug') == video_id)
if not isinstance(wat_id, compat_str) or not wat_id.isdigit():
wat_id = None
except StopIteration:
pass

if not wat_id:
wat_id = self._html_search_regex(
(r'(["\'])(?:https?:)?//www\.wat\.tv/embedframe/.*?(?P<id>\d{8})\1',
r'(["\']?)streamId\1\s*:\s*(["\']?)(?P<id>\d+)\2'),
webpage, 'wat id', group='id')

return self.url_result('wat:%s' % wat_id, 'Wat')
@@ -38,7 +38,7 @@ class TouTvIE(RadioCanadaIE):
'url': 'https://ici.tou.tv/l-age-adulte/S01C501',
'only_matching': True,
}]
_CLIENT_KEY = '4dd36440-09d5-4468-8923-b6d91174ad36'
_CLIENT_KEY = '90505c8d-9c34-4f34-8da1-3a85bdc6d4f4'

def _real_initialize(self):
email, password = self._get_login_info()
@@ -1,32 +1,35 @@
# coding: utf-8
from __future__ import unicode_literals

from .mtv import MTVServicesInfoExtractor
from .spike import ParamountNetworkIE


class TVLandIE(MTVServicesInfoExtractor):
class TVLandIE(ParamountNetworkIE):
IE_NAME = 'tvland.com'
_VALID_URL = r'https?://(?:www\.)?tvland\.com/(?:video-clips|(?:full-)?episodes)/(?P<id>[^/?#.]+)'
_FEED_URL = 'http://www.tvland.com/feeds/mrss/'
_TESTS = [{
# Geo-restricted. Without a proxy metadata are still there. With a
# proxy it redirects to http://m.tvland.com/app/
'url': 'http://www.tvland.com/episodes/hqhps2/everybody-loves-raymond-the-invasion-ep-048',
'url': 'https://www.tvland.com/episodes/s04pzf/everybody-loves-raymond-the-dog-season-1-ep-19',
'info_dict': {
'description': 'md5:80973e81b916a324e05c14a3fb506d29',
'title': 'The Invasion',
'description': 'md5:84928e7a8ad6649371fbf5da5e1ad75a',
'title': 'The Dog',
},
'playlist': [],
'playlist_mincount': 5,
}, {
'url': 'http://www.tvland.com/video-clips/zea2ev/younger-younger--hilary-duff---little-lies',
'url': 'https://www.tvland.com/video-clips/4n87f2/younger-a-first-look-at-younger-season-6',
'md5': 'e2c6389401cf485df26c79c247b08713',
'info_dict': {
'id': 'b8697515-4bbe-4e01-83d5-fa705ce5fa88',
'id': '891f7d3c-5b5b-4753-b879-b7ba1a601757',
'ext': 'mp4',
'title': 'Younger|December 28, 2015|2|NO-EPISODE#|Younger: Hilary Duff - Little Lies',
'description': 'md5:7d192f56ca8d958645c83f0de8ef0269',
'upload_date': '20151228',
'timestamp': 1451289600,
'title': 'Younger|April 30, 2019|6|NO-EPISODE#|A First Look at Younger Season 6',
'description': 'md5:595ea74578d3a888ae878dfd1c7d4ab2',
'upload_date': '20190430',
'timestamp': 1556658000,
},
'params': {
'skip_download': True,
},
}, {
'url': 'http://www.tvland.com/full-episodes/iu0hz6/younger-a-kiss-is-just-a-kiss-season-3-ep-301',
@@ -317,7 +317,7 @@ class TwitchVodIE(TwitchItemBaseIE):
'Downloading %s access token' % self._ITEM_TYPE)

formats = self._extract_m3u8_formats(
'%s/vod/%s?%s' % (
'%s/vod/%s.m3u8?%s' % (
self._USHER_BASE, item_id,
compat_urllib_parse_urlencode({
'allow_source': 'true',
@@ -34,6 +34,7 @@ class VevoIE(VevoBaseIE):
(?:https?://(?:www\.)?vevo\.com/watch/(?!playlist|genre)(?:[^/]+/(?:[^/]+/)?)?|
https?://cache\.vevo\.com/m/html/embed\.html\?video=|
https?://videoplayer\.vevo\.com/embed/embedded\?videoId=|
https?://embed\.vevo\.com/.*?[?&]isrc=|
vevo:)
(?P<id>[^&?#]+)'''

@@ -144,6 +145,9 @@ class VevoIE(VevoBaseIE):
# Geo-restricted to Netherlands/Germany
'url': 'http://www.vevo.com/watch/boostee/pop-corn-clip-officiel/FR1A91600909',
'only_matching': True,
}, {
'url': 'https://embed.vevo.com/?isrc=USH5V1923499&partnerId=4d61b777-8023-4191-9ede-497ed6c24647&partnerAdCode=',
'only_matching': True,
}]
_VERSIONS = {
0: 'youtube', # only in AuthenticateVideo videoVersions
@@ -16,7 +16,6 @@ from ..utils import (
determine_ext,
ExtractorError,
js_to_json,
InAdvancePagedList,
int_or_none,
merge_dicts,
NO_DEFAULT,
@@ -814,7 +813,8 @@ class VimeoChannelIE(VimeoBaseInfoExtractor):
return '%s/videos/page:%d/' % (base_url, pagenum)

def _extract_list_title(self, webpage):
return self._TITLE or self._html_search_regex(self._TITLE_RE, webpage, 'list title')
return self._TITLE or self._html_search_regex(
self._TITLE_RE, webpage, 'list title', fatal=False)

def _login_list_password(self, page_url, list_id, webpage):
login_form = self._search_regex(
@@ -955,7 +955,7 @@ class VimeoGroupsIE(VimeoAlbumIE):
}]

def _extract_list_title(self, webpage):
return self._og_search_title(webpage)
return self._og_search_title(webpage, fatal=False)

def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
@@ -1065,7 +1065,7 @@ class VimeoWatchLaterIE(VimeoChannelIE):
return self._extract_videos('watchlater', 'https://vimeo.com/watchlater')


class VimeoLikesIE(InfoExtractor):
class VimeoLikesIE(VimeoChannelIE):
_VALID_URL = r'https://(?:www\.)?vimeo\.com/(?P<id>[^/]+)/likes/?(?:$|[?#]|sort:)'
IE_NAME = 'vimeo:likes'
IE_DESC = 'Vimeo user likes'
@@ -1073,55 +1073,20 @@ class VimeoLikesIE(InfoExtractor):
'url': 'https://vimeo.com/user755559/likes/',
'playlist_mincount': 293,
'info_dict': {
'id': 'user755559_likes',
'description': 'See all the videos urza likes',
'title': 'Videos urza likes',
'id': 'user755559',
'title': 'urza’s Likes',
},
}, {
'url': 'https://vimeo.com/stormlapse/likes',
'only_matching': True,
}]

def _page_url(self, base_url, pagenum):
return '%s/page:%d/' % (base_url, pagenum)

def _real_extract(self, url):
user_id = self._match_id(url)
webpage = self._download_webpage(url, user_id)
page_count = self._int(
self._search_regex(
r'''(?x)<li><a\s+href="[^"]+"\s+data-page="([0-9]+)">
.*?</a></li>\s*<li\s+class="pagination_next">
''', webpage, 'page count', default=1),
'page count', fatal=True)
PAGE_SIZE = 12
title = self._html_search_regex(
r'(?s)<h1>(.+?)</h1>', webpage, 'title', fatal=False)
description = self._html_search_meta('description', webpage)

def _get_page(idx):
page_url = 'https://vimeo.com/%s/likes/page:%d/sort:date' % (
user_id, idx + 1)
webpage = self._download_webpage(
page_url, user_id,
note='Downloading page %d/%d' % (idx + 1, page_count))
video_list = self._search_regex(
r'(?s)<ol class="js-browse_list[^"]+"[^>]*>(.*?)</ol>',
webpage, 'video content')
paths = re.findall(
r'<li[^>]*>\s*<a\s+href="([^"]+)"', video_list)
for path in paths:
yield {
'_type': 'url',
'url': compat_urlparse.urljoin(page_url, path),
}

pl = InAdvancePagedList(_get_page, page_count, PAGE_SIZE)

return {
'_type': 'playlist',
'id': '%s_likes' % user_id,
'title': title,
'description': description,
'entries': pl,
}
return self._extract_videos(user_id, 'https://vimeo.com/%s/likes' % user_id)


class VHXEmbedIE(InfoExtractor):
@@ -32,6 +32,10 @@ class VzaarIE(InfoExtractor):
'ext': 'mp3',
'title': 'MP3',
},
}, {
# with null videoTitle
'url': 'https://view.vzaar.com/20313539/download',
'only_matching': True,
}]

@staticmethod
@@ -45,7 +49,7 @@ class VzaarIE(InfoExtractor):
video_data = self._download_json(
'http://view.vzaar.com/v2/%s/video' % video_id, video_id)

title = video_data['videoTitle']
title = video_data.get('videoTitle') or video_id

formats = []
@@ -7,7 +7,7 @@ from ..utils import int_or_none


class XiamiBaseIE(InfoExtractor):
_API_BASE_URL = 'http://www.xiami.com/song/playlist/cat/json/id'
_API_BASE_URL = 'https://emumo.xiami.com/song/playlist/cat/json/id'

def _download_webpage_handle(self, *args, **kwargs):
webpage = super(XiamiBaseIE, self)._download_webpage_handle(*args, **kwargs)
@@ -37,7 +37,7 @@ class YourPornIE(InfoExtractor):
self._search_regex(
r'data-vnfo=(["\'])(?P<data>{.+?})\1', webpage, 'data info',
group='data'),
video_id)[video_id]).replace('/cdn/', '/cdn4/')
video_id)[video_id]).replace('/cdn/', '/cdn5/')

title = (self._search_regex(
r'<[^>]+\bclass=["\']PostEditTA[^>]+>([^<]+)', webpage, 'title',
@@ -500,6 +500,12 @@ class YoutubeIE(YoutubeBaseInfoExtractor):

# RTMP (unnamed)
'_rtmp': {'protocol': 'rtmp'},

# av01 video only formats sometimes served with "unknown" codecs
'394': {'acodec': 'none', 'vcodec': 'av01.0.05M.08'},
'395': {'acodec': 'none', 'vcodec': 'av01.0.05M.08'},
'396': {'acodec': 'none', 'vcodec': 'av01.0.05M.08'},
'397': {'acodec': 'none', 'vcodec': 'av01.0.05M.08'},
}
_SUBTITLE_FORMATS = ('srv1', 'srv2', 'srv3', 'ttml', 'vtt')

@@ -1306,11 +1312,18 @@ class YoutubeIE(YoutubeBaseInfoExtractor):

def _parse_sig_js(self, jscode):
funcname = self._search_regex(
(r'(["\'])signature\1\s*,\s*(?P<sig>[a-zA-Z0-9$]+)\(',
(r'\b[cs]\s*&&\s*[adf]\.set\([^,]+\s*,\s*encodeURIComponent\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\(',
r'\b[a-zA-Z0-9]+\s*&&\s*[a-zA-Z0-9]+\.set\([^,]+\s*,\s*encodeURIComponent\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\(',
r'(?P<sig>[a-zA-Z0-9$]+)\s*=\s*function\(\s*a\s*\)\s*{\s*a\s*=\s*a\.split\(\s*""\s*\)',
# Obsolete patterns
r'(["\'])signature\1\s*,\s*(?P<sig>[a-zA-Z0-9$]+)\(',
r'\.sig\|\|(?P<sig>[a-zA-Z0-9$]+)\(',
r'yt\.akamaized\.net/\)\s*\|\|\s*.*?\s*c\s*&&\s*d\.set\([^,]+\s*,\s*(?:encodeURIComponent\s*\()?(?P<sig>[a-zA-Z0-9$]+)\(',
r'\bc\s*&&\s*d\.set\([^,]+\s*,\s*(?:encodeURIComponent\s*\()?\s*(?P<sig>[a-zA-Z0-9$]+)\(',
r'\bc\s*&&\s*d\.set\([^,]+\s*,\s*\([^)]*\)\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\('),
r'yt\.akamaized\.net/\)\s*\|\|\s*.*?\s*[cs]\s*&&\s*[adf]\.set\([^,]+\s*,\s*(?:encodeURIComponent\s*\()?\s*(?P<sig>[a-zA-Z0-9$]+)\(',
r'\b[cs]\s*&&\s*[adf]\.set\([^,]+\s*,\s*(?P<sig>[a-zA-Z0-9$]+)\(',
r'\b[a-zA-Z0-9]+\s*&&\s*[a-zA-Z0-9]+\.set\([^,]+\s*,\s*(?P<sig>[a-zA-Z0-9$]+)\(',
r'\bc\s*&&\s*a\.set\([^,]+\s*,\s*\([^)]*\)\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\(',
r'\bc\s*&&\s*[a-zA-Z0-9]+\.set\([^,]+\s*,\s*\([^)]*\)\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\(',
r'\bc\s*&&\s*[a-zA-Z0-9]+\.set\([^,]+\s*,\s*\([^)]*\)\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\('),
jscode, 'Initial JS player signature function name', group='sig')

jsi = JSInterpreter(jscode)
@@ -1575,8 +1588,15 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
return video_id

def _extract_annotations(self, video_id):
url = 'https://www.youtube.com/annotations_invideo?features=1&legacy=1&video_id=%s' % video_id
return self._download_webpage(url, video_id, note='Searching for annotations.', errnote='Unable to download video annotations.')
return self._download_webpage(
'https://www.youtube.com/annotations_invideo', video_id,
note='Downloading annotations',
errnote='Unable to download video annotations', fatal=False,
query={
'features': 1,
'legacy': 1,
'video_id': video_id,
})

@staticmethod
def _extract_chapters(description, duration):
youtube_dl/utils.py (1597 changes): file diff suppressed because it is too large
@@ -1,3 +1,3 @@
from __future__ import unicode_literals

__version__ = '2019.06.08'
__version__ = '2019.07.02'