mirror of https://codeberg.org/polarisfm/youtube-dl synced 2024-11-24 01:14:32 +01:00
youtube-dl/youtube_dl/extractor/crunchyroll.py
Aakash Gajjar b827ee921f
pull changes from remote master (#190)
* [scrippsnetworks] Add new extractor(closes #19857)(closes #22981)

* [teachable] Improve locked lessons detection (#23528)

* [teachable] Fail with error message if no video URL found

* [extractors] add missing import for ScrippsNetworksIE

* [brightcove] cache brightcove player policy keys

* [prosiebensat1] improve geo restriction handling(closes #23571)

* [soundcloud] automatically update client id on failing requests

* [spankbang] Fix extraction (closes #23307, closes #23423, closes #23444)

* [spankbang] Improve removed video detection (#23423)

* [brightcove] update policy key on failing requests

* [pornhub] Fix extraction and add support for m3u8 formats (closes #22749, closes #23082)

* [pornhub] Improve locked videos detection (closes #22449, closes #22780)

* [brightcove] invalidate policy key cache on failing requests

* [soundcloud] fix client id extraction for non fatal requests

* [ChangeLog] Actualize
[ci skip]

* [devscripts/create-github-release] Switch to using PAT for authentication

Basic authentication will be deprecated soon

* release 2020.01.01

* [redtube] Detect private videos (#23518)

* [vice] improve extraction(closes #23631)

* [devscripts/create-github-release] Remove unused import

* [wistia] improve format extraction and extract subtitles(closes #22590)

* [nrktv:seriebase] Fix extraction (closes #23625) (#23537)

* [discovery] fix anonymous token extraction(closes #23650)

* [scrippsnetworks] add support for www.discovery.com videos

* [scrippsnetworks] correct test case URL

* [dctp] fix format extraction(closes #23656)

* [pandatv] Remove extractor (#23630)

* [naver] improve extraction

- improve geo-restriction handling
- extract automatic captions
- extract uploader metadata
- extract VLive HLS formats

* [naver] improve metadata extraction

* [cloudflarestream] improve extraction

- add support for bytehighway.net domain
- add support for signed URLs
- extract thumbnail

* [cloudflarestream] improve embed URL extraction

* [lego] fix extraction and extract subtitle(closes #23687)

* [safari] Fix kaltura session extraction (closes #23679) (#23670)

* [orf:fm4] Fix extraction (#23599)

* [orf:radio] Clean description and improve extraction

* [twitter] add support for promo_video_website cards(closes #23711)

* [vodplatform] add support for embed.kwikmotion.com domain

* [ndr:base:embed] Improve thumbnails extraction (closes #23731)

* [canvas] Add support for new API endpoint and update tests (closes #17680, closes #18629)

* [travis] Add flake8 job (#23720)

* [yourporn] Fix extraction (closes #21645, closes #22255, closes #23459)

* [ChangeLog] Actualize
[ci skip]

* release 2020.01.15

* [soundcloud] Restore previews extraction (closes #23739)

* [orf:tvthek] Improve geo restricted videos detection (closes #23741)

* [zype] improve extraction

- extract subtitles(closes #21258)
- support URLs with alternative keys/tokens(#21258)
- extract more metadata

* [americastestkitchen] fix extraction

* [nbc] add support for nbc multi network URLs(closes #23049)

* [ard] improve extraction(closes #23761)

- simplify extraction
- extract age limit and series
- bypass geo-restriction

* [ivi:compilation] Fix entries extraction (closes #23770)

* [24video] Add support for 24video.vip (closes #23753)

* [businessinsider] Fix jwplatform id extraction (closes #22929) (#22954)

* [ard] add a missing condition

* [azmedien] fix extraction(closes #23783)

* [voicerepublic] fix extraction

* [stretchinternet] fix extraction(closes #4319)

* [youtube] Fix sigfunc name extraction (closes #23819)

* [ChangeLog] Actualize
[ci skip]

* release 2020.01.24

* [soundcloud] improve private playlist/set tracks extraction

https://github.com/ytdl-org/youtube-dl/issues/3707#issuecomment-577873539

* [svt] fix article extraction(closes #22897)(closes #22919)

* [svt] fix series extraction(closes #22297)

* [viewlift] improve extraction

- fix extraction(closes #23851)
- add support for authentication
- add support for more domains

* [vimeo] fix album extraction(closes #23864)

* [tva] Relax _VALID_URL (closes #23903)

* [tv5mondeplus] Fix extraction (closes #23907, closes #23911)

* [twitch:stream] Lowercase channel id for stream request (closes #23917)

* [sportdeutschland] Update to new sportdeutschland API

They switched to SSL, but under a different host AND path...
Remove the old test cases because these videos have become unavailable.

* [popcorntimes] Add extractor (closes #23949)

* [thisoldhouse] fix extraction(closes #23951)

* [toggle] Add support for mewatch.sg (closes #23895) (#23930)

* [compat] Introduce compat_realpath (refs #23991)

* [update] Fix updating via symlinks (closes #23991)

* [nytimes] improve format sorting(closes #24010)

* [abc:iview] Support 720p (#22907) (#22921)

* [nova:embed] Fix extraction (closes #23672)

* [nova:embed] Improve (closes #23690)

* [nova] Improve extraction (refs #23690)

* [jpopsuki] Remove extractor (closes #23858)

* [YoutubeDL] Fix playlist entry indexing with --playlist-items (closes #10591, closes #10622)

* [test_YoutubeDL] Fix get_ids

* [test_YoutubeDL] Add tests for #10591 (closes #23873)

* [24video] Add support for porn.24video.net (closes #23779, closes #23784)

* [npr] Add support for streams (closes #24042)

* [ChangeLog] Actualize
[ci skip]

* release 2020.02.16

* [tv2dk:bornholm:play] Fix extraction (#24076)

* [imdb] Fix extraction (closes #23443)

* [wistia] Add support for multiple generic embeds (closes #8347, closes #11385)

* [teachable] Add support for multiple videos per lecture (closes #24101)

* [pornhd] Fix extraction (closes #24128)

* [options] Remove duplicate short option -v for --version (#24162)

* [extractor/common] Convert ISM manifest to unicode before processing on python 2 (#24152)

* [YoutubeDL] Force redirect URL to unicode on python 2

* Remove no longer needed compat_str around geturl

* [youjizz] Fix extraction (closes #24181)

* [test_subtitles] Remove obsolete test

* [zdf:channel] Fix tests

* [zapiks] Fix test

* [xtube] Fix metadata extraction (closes #21073, closes #22455)

* [xtube:user] Fix test

* [telecinco] Fix extraction (refs #24195)

* [telecinco] Add support for article opening videos

* [franceculture] Fix extraction (closes #24204)

* [xhamster] Fix extraction (closes #24205)

* [ChangeLog] Actualize
[ci skip]

* release 2020.03.01

* [vimeo] Fix subtitles URLs (#24209)

* [servus] Add support for new URL schema (closes #23475, closes #23583, closes #24142)

* [youtube:playlist] Fix tests (closes #23872) (#23885)

* [peertube] Improve extraction

* [peertube] Fix issues and improve extraction (closes #23657)

* [pornhub] Improve title extraction (closes #24184)

* [vimeo] fix showcase password protected video extraction(closes #24224)

* [youtube] Fix age-gated videos support without login (closes #24248)

* [youtube] Fix tests

* [ChangeLog] Actualize
[ci skip]

* release 2020.03.06

* [nhk] update API version(closes #24270)

* [youtube] Improve extraction in 429 error conditions (closes #24283)

* [youtube] Improve age-gated videos extraction in 429 error conditions (refs #24283)

* [youtube] Remove outdated code

Additional get_video_info requests don't seem to provide any extra itags any longer

* [README.md] Clarify 429 error

* [pornhub] Add support for pornhubpremium.com (#24288)

* [utils] Add support for cookies with spaces used instead of tabs

* [ChangeLog] Actualize
[ci skip]

* release 2020.03.08

* Revert "[utils] Add support for cookies with spaces used instead of tabs"

According to [1] TABs must be used as separators between fields.
Files produced by some tools with spaces as separators are considered
malformed.

1. https://curl.haxx.se/docs/http-cookies.html

This reverts commit cff99c91d1.
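The separator rule above can be sketched as a minimal check (a hypothetical `is_wellformed_cookie_line` helper, not the actual youtube-dl code): a Netscape-format cookie line has exactly seven TAB-separated fields, so a line using spaces as separators counts as malformed.

```python
# Minimal sketch, assuming the Netscape cookie file format described in
# the curl documentation referenced above: exactly seven TAB-separated
# fields per entry, so space-separated lines are treated as malformed.

def is_wellformed_cookie_line(line):
    """Return True if the line looks like a valid Netscape cookie entry."""
    if line.startswith('#') or not line.strip():
        return False  # comments and blank lines are not cookie entries
    return len(line.rstrip('\n').split('\t')) == 7
```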

* [utils] Add reference to cookie file format

* Revert "[vimeo] fix showcase password protected video extraction(closes #24224)"

This reverts commit 12ee431676.

* [nhk] Relax _VALID_URL (#24329)

* [nhk] Remove obsolete rtmp formats (closes #24329)

* [nhk] Update m3u8 URL and use native hls (#24329)

* [ndr] Fix extraction (closes #24326)

* [xtube] Fix formats extraction (closes #24348)

* [xtube] Fix typo

* [hellporno] Fix extraction (closes #24399)

* [cbc:watch] Add support for authentication

* [cbc:watch] Fix authenticated device token caching (closes #19160)

* [soundcloud] fix download url extraction(closes #24394)

* [limelight] remove disabled API requests(closes #24255)

* [bilibili] Add support for new URL schema with BV ids (closes #24439, closes #24442)

* [bilibili] Add support for player.bilibili.com (closes #24402)

* [teachable] Extract chapter metadata (closes #24421)

* [generic] Look for teachable embeds before wistia

* [teachable] Update upskillcourses domain

New version does not use teachable platform any longer

* [teachable] Update gns3 domain

* [teachable] Update test

* [ChangeLog] Actualize
[ci skip]

* [ChangeLog] Actualize
[ci skip]

* release 2020.03.24

* [spankwire] Fix extraction (closes #18924, closes #20648)

* [spankwire] Add support for generic embeds (refs #24633)

* [youporn] Add support for generic embeds

* [mofosex] Add support for generic embeds (closes #24633)

* [tele5] Fix extraction (closes #24553)

* [extractor/common] Skip malformed ISM manifest XMLs while extracting ISM formats (#24667)

* [tv4] Fix ISM formats extraction (closes #24667)

* [twitch:clips] Extend _VALID_URL (closes #24290) (#24642)

* [motherless] Fix extraction (closes #24699)

* [nova:embed] Fix extraction (closes #24700)

* [youtube] Skip broken multifeed videos (closes #24711)

* [soundcloud] Extract AAC format

* [soundcloud] Improve AAC format extraction (closes #19173, closes #24708)

* [thisoldhouse] Fix video id extraction (closes #24548)

Added support for:
with or without "www."
and either ".chorus.build" or ".com"

It now validates correctly on older URLs
```
<iframe src="https://thisoldhouse.chorus.build/videos/zype/5e33baec27d2e50001d5f52f
```
and newer ones
```
<iframe src="https://www.thisoldhouse.com/videos/zype/5e2b70e95216cc0001615120
```
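The two embed shapes quoted above can be covered by a single pattern. This is a hedged sketch, not the extractor's actual regex; `ZYPE_EMBED_RE` and `extract_zype_id` are made-up names for illustration.

```python
import re

# Hypothetical pattern covering both embed shapes quoted above: an
# optional "www." prefix and either the ".chorus.build" or ".com" host,
# followed by a 24-character hex Zype video id.
ZYPE_EMBED_RE = re.compile(
    r'https?://(?:www\.)?thisoldhouse\.(?:chorus\.build|com)/videos/zype/([0-9a-f]{24})')


def extract_zype_id(webpage):
    """Return the Zype video id from an embed URL in the page, or None."""
    mobj = ZYPE_EMBED_RE.search(webpage)
    return mobj.group(1) if mobj else None
```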

* [thisoldhouse] Improve video id extraction (closes #24549)

* [youtube] Fix DRM videos detection (refs #24736)

* [options] Clarify doc on --exec command (closes #19087) (#24883)

* [prosiebensat1] Improve extraction and remove 7tv.de support (#24948)

* [prosiebensat1] Extract series metadata

* [tenplay] Relax _VALID_URL (closes #25001)

* [tvplay] fix Viafree extraction(closes #15189)(closes #24473)(closes #24789)

* [yahoo] fix GYAO Player extraction and relax title URL regex(closes #24178)(closes #24778)

* [youtube] Use redirected video id if any (closes #25063)

* [youtube] Improve player id extraction and add tests

* [extractor/common] Extract multiple JSON-LD entries

* [crunchyroll] Fix and improve extraction (closes #25096, closes #25060)

* [ChangeLog] Actualize
[ci skip]

* release 2020.05.03

* [puhutv] Remove no longer available HTTP formats (closes #25124)

* [utils] Improve cookie files support

+ Add support for UTF-8 in cookie files
* Skip malformed cookie file entries instead of crashing (invalid entry len, invalid expires at)
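The skip-instead-of-crash behaviour described above can be sketched as follows (a hypothetical `load_cookie_lines` helper, not the actual `utils` code): entries with the wrong field count or a non-numeric expiry are dropped rather than raising on the first bad line.

```python
# Hedged sketch of skipping malformed Netscape cookie file entries
# (invalid entry length, invalid expires-at) instead of crashing.

def load_cookie_lines(lines):
    """Return the field lists of the well-formed cookie entries."""
    cookies = []
    for line in lines:
        if not line.strip() or line.startswith('#'):
            continue  # blank line or comment
        fields = line.rstrip('\n').split('\t')
        if len(fields) != 7:
            continue  # invalid entry length, skip
        try:
            int(fields[4])  # expires-at must be numeric
        except ValueError:
            continue  # invalid expires-at, skip
        cookies.append(fields)
    return cookies
```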

* [dailymotion] Fix typo

* [compat] Introduce compat_cookiejar_Cookie

* [extractor/common] Use compat_cookiejar_Cookie for _set_cookie (closes #23256, closes #24776)

To always ensure cookie name and value are bytestrings on python 2.

* [orf] Add support for more radio stations (closes #24938) (#24968)

* [uol] fix extraction(closes #22007)

* [downloader/http] Finish downloading once received data length matches expected

Always do this when possible, i.e. when Content-Length or the expected length is known, not only in tests.
This saves an unnecessary final loop iteration that tries to read 0 bytes.

* [downloader/http] Request last data block of exact remaining size

When possible, always request the last data block with the exact remaining size, rather than the current block size.
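The two downloader changes above can be sketched as a request planner (a hypothetical `plan_requests` helper, not the actual `downloader/http` code): each request is capped at the exact number of bytes remaining, and the loop stops as soon as the byte counter matches the expected length.

```python
# Hedged sketch: plan (offset, length) block requests so the last block
# has exactly the remaining size and no extra zero-byte read happens.

def plan_requests(total_size, block_size):
    """Yield (offset, length) request pairs until total_size is covered."""
    byte_counter = 0
    while byte_counter < total_size:
        # request the last data block with the exact remaining size
        length = min(block_size, total_size - byte_counter)
        yield (byte_counter, length)
        byte_counter += length
```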

* [iprima] Improve extraction (closes #25138)

* [youtube] Improve signature cipher extraction (closes #25188)

* [ChangeLog] Actualize
[ci skip]

* release 2020.05.08

* [spike] fix Bellator mgid extraction(closes #25195)

* [bbccouk] PEP8

* [mailru] Fix extraction (closes #24530) (#25239)

* [README.md] flake8 HTTPS URL (#25230)

* [youtube] Add support for yewtu.be (#25226)

* [soundcloud] reduce API playlist page limit(closes #25274)

* [vimeo] improve format extraction and sorting(closes #25285)

* [redtube] Improve title extraction (#25208)

* [indavideo] Switch to HTTPS for API request (#25191)

* [utils] Fix file permissions in write_json_file (closes #12471) (#25122)

* [redtube] Improve formats extraction and extract m3u8 formats (closes #25311, closes #25321)

* [ard] Improve _VALID_URL (closes #25134) (#25198)

* [giantbomb] Extend _VALID_URL (#25222)

* [postprocessor/ffmpeg] Embed series metadata with --add-metadata

* [youtube] Add support for more invidious instances (#25417)

* [ard:beta] Extend _VALID_URL (closes #25405)

* [ChangeLog] Actualize
[ci skip]

* release 2020.05.29

* [jwplatform] Improve embeds extraction (closes #25467)

* [periscope] Fix untitled broadcasts (#25482)

* [twitter:broadcast] Add untitled periscope broadcast test

* [malltv] Add support for sk.mall.tv (#25445)

* [brightcove] Fix subtitles extraction (closes #25540)

* [brightcove] Sort imports

* [twitch] Pass v5 accept header and fix thumbnails extraction (closes #25531)

* [twitch:stream] Fix extraction (closes #25528)

* [twitch:stream] Expect 400 and 410 HTTP errors from API

* [tele5] Prefer jwplatform over nexx (closes #25533)

* [jwplatform] Add support for bypass geo restriction

* [tele5] Bypass geo restriction

* [ChangeLog] Actualize
[ci skip]

* release 2020.06.06

* [kaltura] Add support for multiple embeds on a webpage (closes #25523)

* [youtube] Extract chapters from JSON (closes #24819)

* [facebook] Support single-video ID links

I stumbled upon this at https://www.facebook.com/bwfbadminton/posts/10157127020046316 . No idea how prevalent it is yet.

* [youtube] Fix playlist and feed extraction (closes #25675)

* [youtube] Fix thumbnails extraction and remove uploader id extraction warning (closes #25676)

* [youtube] Fix upload date extraction

* [youtube] Improve view count extraction

* [youtube] Fix uploader id and uploader URL extraction

* [ChangeLog] Actualize
[ci skip]

* release 2020.06.16

* [youtube] Fix categories and improve tags extraction

* [youtube] Force old layout (closes #25682, closes #25683, closes #25680, closes #25686)

* [ChangeLog] Actualize
[ci skip]

* release 2020.06.16.1

* [brightcove] Improve embed detection (closes #25674)

* [bellmedia] add support for cp24.com clip URLs(closes #25764)

* [youtube:playlists] Extend _VALID_URL (closes #25810)

* [youtube] Prevent excess HTTP 301 (#25786)

* [wistia] Restrict embed regex (closes #25969)

* [youtube] Improve description extraction (closes #25937) (#25980)

* [youtube] Fix sigfunc name extraction (closes #26134, closes #26135, closes #26136, closes #26137)

* [ChangeLog] Actualize
[ci skip]

* release 2020.07.28

* [xhamster] Extend _VALID_URL (closes #25789) (#25804)

* [xhamster] Fix extraction (closes #26157) (#26254)

* [xhamster] Extend _VALID_URL (closes #25927)

Co-authored-by: Remita Amine <remitamine@gmail.com>
Co-authored-by: Sergey M․ <dstftw@gmail.com>
Co-authored-by: nmeum <soeren+github@soeren-tempel.net>
Co-authored-by: Roxedus <me@roxedus.dev>
Co-authored-by: Singwai Chan <c.singwai@gmail.com>
Co-authored-by: cdarlint <cdarlint@users.noreply.github.com>
Co-authored-by: Johannes N <31795504+jonolt@users.noreply.github.com>
Co-authored-by: jnozsc <jnozsc@gmail.com>
Co-authored-by: Moritz Patelscheck <moritz.patelscheck@campus.tu-berlin.de>
Co-authored-by: PB <3854688+uno20001@users.noreply.github.com>
Co-authored-by: Philipp Hagemeister <phihag@phihag.de>
Co-authored-by: Xaver Hellauer <software@hellauer.bayern>
Co-authored-by: d2au <d2au.dev@gmail.com>
Co-authored-by: Jan 'Yenda' Trmal <jtrmal@gmail.com>
Co-authored-by: jxu <7989982+jxu@users.noreply.github.com>
Co-authored-by: Martin Ström <name@my-domain.se>
Co-authored-by: The Hatsune Daishi <nao20010128@gmail.com>
Co-authored-by: tsia <github@tsia.de>
Co-authored-by: 3risian <59593325+3risian@users.noreply.github.com>
Co-authored-by: Tristan Waddington <tristan.waddington@gmail.com>
Co-authored-by: Devon Meunier <devon.meunier@gmail.com>
Co-authored-by: Felix Stupp <felix.stupp@outlook.com>
Co-authored-by: tom <tomster954@gmail.com>
Co-authored-by: AndrewMBL <62922222+AndrewMBL@users.noreply.github.com>
Co-authored-by: willbeaufoy <will@willbeaufoy.net>
Co-authored-by: Philipp Stehle <anderschwiedu@googlemail.com>
Co-authored-by: hh0rva1h <61889859+hh0rva1h@users.noreply.github.com>
Co-authored-by: comsomisha <shmelev1996@mail.ru>
Co-authored-by: TotalCaesar659 <14265316+TotalCaesar659@users.noreply.github.com>
Co-authored-by: Juan Francisco Cantero Hurtado <iam@juanfra.info>
Co-authored-by: Dave Loyall <dave@the-good-guys.net>
Co-authored-by: tlsssl <63866177+tlsssl@users.noreply.github.com>
Co-authored-by: Rob <ankenyr@gmail.com>
Co-authored-by: Michael Klein <github@a98shuttle.de>
Co-authored-by: JordanWeatherby <47519158+JordanWeatherby@users.noreply.github.com>
Co-authored-by: striker.sh <19488257+strikersh@users.noreply.github.com>
Co-authored-by: Matej Dujava <mdujava@gmail.com>
Co-authored-by: Glenn Slayden <5589855+glenn-slayden@users.noreply.github.com>
Co-authored-by: MRWITEK <mrvvitek@gmail.com>
Co-authored-by: JChris246 <43832407+JChris246@users.noreply.github.com>
Co-authored-by: TheRealDude2 <the.real.dude@gmx.de>
2020-08-25 20:23:34 +05:30


# coding: utf-8
from __future__ import unicode_literals

import re
import json
import zlib

from hashlib import sha1
from math import pow, sqrt, floor

from .common import InfoExtractor
from .vrv import VRVIE
from ..compat import (
    compat_b64decode,
    compat_etree_Element,
    compat_etree_fromstring,
    compat_str,
    compat_urllib_parse_urlencode,
    compat_urllib_request,
    compat_urlparse,
)
from ..utils import (
    ExtractorError,
    bytes_to_intlist,
    extract_attributes,
    float_or_none,
    intlist_to_bytes,
    int_or_none,
    lowercase_escape,
    merge_dicts,
    remove_end,
    sanitized_Request,
    urlencode_postdata,
    xpath_text,
)
from ..aes import (
    aes_cbc_decrypt,
)


class CrunchyrollBaseIE(InfoExtractor):
    _LOGIN_URL = 'https://www.crunchyroll.com/login'
    _LOGIN_FORM = 'login_form'
    _NETRC_MACHINE = 'crunchyroll'

    def _call_rpc_api(self, method, video_id, note=None, data=None):
        data = data or {}
        data['req'] = 'RpcApi' + method
        data = compat_urllib_parse_urlencode(data).encode('utf-8')
        return self._download_xml(
            'https://www.crunchyroll.com/xml/',
            video_id, note, fatal=False, data=data, headers={
                'Content-Type': 'application/x-www-form-urlencoded',
            })

    def _login(self):
        username, password = self._get_login_info()
        if username is None:
            return

        login_page = self._download_webpage(
            self._LOGIN_URL, None, 'Downloading login page')

        def is_logged(webpage):
            return 'href="/logout"' in webpage

        # Already logged in
        if is_logged(login_page):
            return

        login_form_str = self._search_regex(
            r'(?P<form><form[^>]+?id=(["\'])%s\2[^>]*>)' % self._LOGIN_FORM,
            login_page, 'login form', group='form')

        post_url = extract_attributes(login_form_str).get('action')
        if not post_url:
            post_url = self._LOGIN_URL
        elif not post_url.startswith('http'):
            post_url = compat_urlparse.urljoin(self._LOGIN_URL, post_url)

        login_form = self._form_hidden_inputs(self._LOGIN_FORM, login_page)

        login_form.update({
            'login_form[name]': username,
            'login_form[password]': password,
        })

        response = self._download_webpage(
            post_url, None, 'Logging in', 'Wrong login info',
            data=urlencode_postdata(login_form),
            headers={'Content-Type': 'application/x-www-form-urlencoded'})

        # Successful login
        if is_logged(response):
            return

        error = self._html_search_regex(
            '(?s)<ul[^>]+class=["\']messages["\'][^>]*>(.+?)</ul>',
            response, 'error message', default=None)
        if error:
            raise ExtractorError('Unable to login: %s' % error, expected=True)

        raise ExtractorError('Unable to log in')

    def _real_initialize(self):
        self._login()

    @staticmethod
    def _add_skip_wall(url):
        parsed_url = compat_urlparse.urlparse(url)
        qs = compat_urlparse.parse_qs(parsed_url.query)
        # Always force skip_wall to bypass maturity wall, namely 18+ confirmation message:
        # > This content may be inappropriate for some people.
        # > Are you sure you want to continue?
        # since it's not disabled by default in crunchyroll account's settings.
        # See https://github.com/ytdl-org/youtube-dl/issues/7202.
        qs['skip_wall'] = ['1']
        return compat_urlparse.urlunparse(
            parsed_url._replace(query=compat_urllib_parse_urlencode(qs, True)))


class CrunchyrollIE(CrunchyrollBaseIE, VRVIE):
    IE_NAME = 'crunchyroll'
    _VALID_URL = r'https?://(?:(?P<prefix>www|m)\.)?(?P<url>crunchyroll\.(?:com|fr)/(?:media(?:-|/\?id=)|(?:[^/]*/){1,2}[^/?&]*?)(?P<video_id>[0-9]+))(?:[/?&]|$)'
    _TESTS = [{
        'url': 'http://www.crunchyroll.com/wanna-be-the-strongest-in-the-world/episode-1-an-idol-wrestler-is-born-645513',
        'info_dict': {
            'id': '645513',
            'ext': 'mp4',
            'title': 'Wanna be the Strongest in the World Episode 1 An Idol-Wrestler is Born!',
            'description': 'md5:2d17137920c64f2f49981a7797d275ef',
            'thumbnail': r're:^https?://.*\.jpg$',
            'uploader': 'Yomiuri Telecasting Corporation (YTV)',
            'upload_date': '20131013',
            'url': 're:(?!.*&amp)',
        },
        'params': {
            # rtmp
            'skip_download': True,
        },
        'skip': 'Video gone',
    }, {
        'url': 'http://www.crunchyroll.com/media-589804/culture-japan-1',
        'info_dict': {
            'id': '589804',
            'ext': 'flv',
            'title': 'Culture Japan Episode 1 Rebuilding Japan after the 3.11',
            'description': 'md5:2fbc01f90b87e8e9137296f37b461c12',
            'thumbnail': r're:^https?://.*\.jpg$',
            'uploader': 'Danny Choo Network',
            'upload_date': '20120213',
        },
        'params': {
            # rtmp
            'skip_download': True,
        },
        'skip': 'Video gone',
    }, {
        'url': 'http://www.crunchyroll.com/rezero-starting-life-in-another-world-/episode-5-the-morning-of-our-promise-is-still-distant-702409',
        'info_dict': {
            'id': '702409',
            'ext': 'mp4',
            'title': compat_str,
            'description': compat_str,
            'thumbnail': r're:^https?://.*\.jpg$',
            'uploader': 'Re:Zero Partners',
            'timestamp': 1462098900,
            'upload_date': '20160501',
        },
        'params': {
            # m3u8 download
            'skip_download': True,
        },
    }, {
        'url': 'http://www.crunchyroll.com/konosuba-gods-blessing-on-this-wonderful-world/episode-1-give-me-deliverance-from-this-judicial-injustice-727589',
        'info_dict': {
            'id': '727589',
            'ext': 'mp4',
            'title': compat_str,
            'description': compat_str,
            'thumbnail': r're:^https?://.*\.jpg$',
            'uploader': 'Kadokawa Pictures Inc.',
            'timestamp': 1484130900,
            'upload_date': '20170111',
            'series': compat_str,
            'season': "KONOSUBA -God's blessing on this wonderful world! 2",
            'season_number': 2,
            'episode': 'Give Me Deliverance From This Judicial Injustice!',
            'episode_number': 1,
        },
        'params': {
            # m3u8 download
            'skip_download': True,
        },
    }, {
        'url': 'http://www.crunchyroll.fr/girl-friend-beta/episode-11-goodbye-la-mode-661697',
        'only_matching': True,
    }, {
        # geo-restricted (US), 18+ maturity wall, non-premium available
        'url': 'http://www.crunchyroll.com/cosplay-complex-ova/episode-1-the-birth-of-the-cosplay-club-565617',
        'only_matching': True,
    }, {
        # A description with double quotes
        'url': 'http://www.crunchyroll.com/11eyes/episode-1-piros-jszaka-red-night-535080',
        'info_dict': {
            'id': '535080',
            'ext': 'mp4',
            'title': compat_str,
            'description': compat_str,
            'uploader': 'Marvelous AQL Inc.',
            'timestamp': 1255512600,
            'upload_date': '20091014',
        },
        'params': {
            # Just test metadata extraction
            'skip_download': True,
        },
    }, {
        # make sure we can extract an uploader name that's not a link
        'url': 'http://www.crunchyroll.com/hakuoki-reimeiroku/episode-1-dawn-of-the-divine-warriors-606899',
        'info_dict': {
            'id': '606899',
            'ext': 'mp4',
            'title': 'Hakuoki Reimeiroku Episode 1 Dawn of the Divine Warriors',
            'description': 'Ryunosuke was left to die, but Serizawa-san asked him a simple question "Do you want to live?"',
            'uploader': 'Geneon Entertainment',
            'upload_date': '20120717',
        },
        'params': {
            # just test metadata extraction
            'skip_download': True,
        },
        'skip': 'Video gone',
    }, {
        # A video with a vastly different season name compared to the series name
        'url': 'http://www.crunchyroll.com/nyarko-san-another-crawling-chaos/episode-1-test-590532',
        'info_dict': {
            'id': '590532',
            'ext': 'mp4',
            'title': compat_str,
            'description': compat_str,
            'uploader': 'TV TOKYO',
            'timestamp': 1330956000,
            'upload_date': '20120305',
            'series': 'Nyarko-san: Another Crawling Chaos',
            'season': 'Haiyoru! Nyaruani (ONA)',
        },
        'params': {
            # Just test metadata extraction
            'skip_download': True,
        },
    }, {
        'url': 'http://www.crunchyroll.com/media-723735',
        'only_matching': True,
    }, {
        'url': 'https://www.crunchyroll.com/en-gb/mob-psycho-100/episode-2-urban-legends-encountering-rumors-780921',
        'only_matching': True,
    }]

    _FORMAT_IDS = {
        '360': ('60', '106'),
        '480': ('61', '106'),
        '720': ('62', '106'),
        '1080': ('80', '108'),
    }

    def _download_webpage(self, url_or_request, *args, **kwargs):
        request = (url_or_request if isinstance(url_or_request, compat_urllib_request.Request)
                   else sanitized_Request(url_or_request))
        # Accept-Language must be set explicitly to accept any language to avoid issues
        # similar to https://github.com/ytdl-org/youtube-dl/issues/6797.
        # Along with IP address Crunchyroll uses Accept-Language to guess whether georestriction
        # should be imposed or not (from what I can see it just takes the first language
        # ignoring the priority and requires it to correspond the IP). By the way this causes
        # Crunchyroll to not work in georestriction cases in some browsers that don't place
        # the locale lang first in header. However allowing any language seems to workaround the issue.
        request.add_header('Accept-Language', '*')
        return super(CrunchyrollBaseIE, self)._download_webpage(request, *args, **kwargs)

    def _decrypt_subtitles(self, data, iv, id):
        data = bytes_to_intlist(compat_b64decode(data))
        iv = bytes_to_intlist(compat_b64decode(iv))
        id = int(id)

        def obfuscate_key_aux(count, modulo, start):
            output = list(start)
            for _ in range(count):
                output.append(output[-1] + output[-2])
            # cut off start values
            output = output[2:]
            output = list(map(lambda x: x % modulo + 33, output))
            return output

        def obfuscate_key(key):
            num1 = int(floor(pow(2, 25) * sqrt(6.9)))
            num2 = (num1 ^ key) << 5
            num3 = key ^ num1
            num4 = num3 ^ (num3 >> 3) ^ num2
            prefix = intlist_to_bytes(obfuscate_key_aux(20, 97, (1, 2)))
            shaHash = bytes_to_intlist(sha1(prefix + str(num4).encode('ascii')).digest())
            # Extend 160 Bit hash to 256 Bit
            return shaHash + [0] * 12

        key = obfuscate_key(id)

        decrypted_data = intlist_to_bytes(aes_cbc_decrypt(data, key, iv))
        return zlib.decompress(decrypted_data)

    def _convert_subtitles_to_srt(self, sub_root):
        output = ''

        for i, event in enumerate(sub_root.findall('./events/event'), 1):
            start = event.attrib['start'].replace('.', ',')
            end = event.attrib['end'].replace('.', ',')
            text = event.attrib['text'].replace('\\N', '\n')
            output += '%d\n%s --> %s\n%s\n\n' % (i, start, end, text)
        return output

    def _convert_subtitles_to_ass(self, sub_root):
        output = ''

        def ass_bool(strvalue):
            assvalue = '0'
            if strvalue == '1':
                assvalue = '-1'
            return assvalue

        output = '[Script Info]\n'
        output += 'Title: %s\n' % sub_root.attrib['title']
        output += 'ScriptType: v4.00+\n'
        output += 'WrapStyle: %s\n' % sub_root.attrib['wrap_style']
        output += 'PlayResX: %s\n' % sub_root.attrib['play_res_x']
        output += 'PlayResY: %s\n' % sub_root.attrib['play_res_y']
        output += """
[V4+ Styles]
Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
"""
        for style in sub_root.findall('./styles/style'):
            output += 'Style: ' + style.attrib['name']
            output += ',' + style.attrib['font_name']
            output += ',' + style.attrib['font_size']
            output += ',' + style.attrib['primary_colour']
            output += ',' + style.attrib['secondary_colour']
            output += ',' + style.attrib['outline_colour']
            output += ',' + style.attrib['back_colour']
            output += ',' + ass_bool(style.attrib['bold'])
            output += ',' + ass_bool(style.attrib['italic'])
            output += ',' + ass_bool(style.attrib['underline'])
            output += ',' + ass_bool(style.attrib['strikeout'])
            output += ',' + style.attrib['scale_x']
            output += ',' + style.attrib['scale_y']
            output += ',' + style.attrib['spacing']
            output += ',' + style.attrib['angle']
            output += ',' + style.attrib['border_style']
            output += ',' + style.attrib['outline']
            output += ',' + style.attrib['shadow']
            output += ',' + style.attrib['alignment']
            output += ',' + style.attrib['margin_l']
            output += ',' + style.attrib['margin_r']
            output += ',' + style.attrib['margin_v']
            output += ',' + style.attrib['encoding']
            output += '\n'

        output += """
[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
"""
        for event in sub_root.findall('./events/event'):
            output += 'Dialogue: 0'
            output += ',' + event.attrib['start']
            output += ',' + event.attrib['end']
            output += ',' + event.attrib['style']
            output += ',' + event.attrib['name']
            output += ',' + event.attrib['margin_l']
            output += ',' + event.attrib['margin_r']
            output += ',' + event.attrib['margin_v']
            output += ',' + event.attrib['effect']
            output += ',' + event.attrib['text']
            output += '\n'

        return output

    def _extract_subtitles(self, subtitle):
        sub_root = compat_etree_fromstring(subtitle)

        return [{
            'ext': 'srt',
            'data': self._convert_subtitles_to_srt(sub_root),
        }, {
            'ext': 'ass',
            'data': self._convert_subtitles_to_ass(sub_root),
        }]

    def _get_subtitles(self, video_id, webpage):
        subtitles = {}
        for sub_id, sub_name in re.findall(r'\bssid=([0-9]+)"[^>]+?\btitle="([^"]+)', webpage):
            sub_doc = self._call_rpc_api(
                'Subtitle_GetXml', video_id,
                'Downloading subtitles for ' + sub_name, data={
                    'subtitle_script_id': sub_id,
                })
            if not isinstance(sub_doc, compat_etree_Element):
                continue
            sid = sub_doc.get('id')
            iv = xpath_text(sub_doc, 'iv', 'subtitle iv')
            data = xpath_text(sub_doc, 'data', 'subtitle data')
            if not sid or not iv or not data:
                continue
            subtitle = self._decrypt_subtitles(data, iv, sid).decode('utf-8')
            lang_code = self._search_regex(r'lang_code=["\']([^"\']+)', subtitle, 'subtitle_lang_code', fatal=False)
            if not lang_code:
                continue
            subtitles[lang_code] = self._extract_subtitles(subtitle)
        return subtitles
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
video_id = mobj.group('video_id')
if mobj.group('prefix') == 'm':
mobile_webpage = self._download_webpage(url, video_id, 'Downloading mobile webpage')
webpage_url = self._search_regex(r'<link rel="canonical" href="([^"]+)" />', mobile_webpage, 'webpage_url')
else:
webpage_url = 'http://www.' + mobj.group('url')
webpage = self._download_webpage(
self._add_skip_wall(webpage_url), video_id,
headers=self.geo_verification_headers())
note_m = self._html_search_regex(
r'<div class="showmedia-trailer-notice">(.+?)</div>',
webpage, 'trailer-notice', default='')
if note_m:
raise ExtractorError(note_m)
mobj = re.search(r'Page\.messaging_box_controller\.addItems\(\[(?P<msg>{.+?})\]\)', webpage)
if mobj:
msg = json.loads(mobj.group('msg'))
if msg.get('type') == 'error':
raise ExtractorError('crunchyroll returned error: %s' % msg['message_body'], expected=True)
if 'To view this, please log in to verify you are 18 or older.' in webpage:
self.raise_login_required()
media = self._parse_json(self._search_regex(
r'vilos\.config\.media\s*=\s*({.+?});',
webpage, 'vilos media', default='{}'), video_id)
media_metadata = media.get('metadata') or {}
language = self._search_regex(
r'(?:vilos\.config\.player\.language|LOCALE)\s*=\s*(["\'])(?P<lang>(?:(?!\1).)+)\1',
webpage, 'language', default=None, group='lang')
video_title = self._html_search_regex(
(r'(?s)<h1[^>]*>((?:(?!<h1).)*?<(?:span[^>]+itemprop=["\']title["\']|meta[^>]+itemprop=["\']position["\'])[^>]*>(?:(?!<h1).)+?)</h1>',
r'<title>(.+?),\s+-\s+.+? Crunchyroll'),
webpage, 'video_title', default=None)
if not video_title:
video_title = re.sub(r'^Watch\s+', '', self._og_search_description(webpage))
video_title = re.sub(r' {2,}', ' ', video_title)
video_description = (self._parse_json(self._html_search_regex(
r'<script[^>]*>\s*.+?\[media_id=%s\].+?({.+?"description"\s*:.+?})\);' % video_id,
webpage, 'description', default='{}'), video_id) or media_metadata).get('description')
if video_description:
video_description = lowercase_escape(video_description.replace(r'\r\n', '\n'))
video_uploader = self._html_search_regex(
# try looking for both an uploader that's a link and one that's not
[r'<a[^>]+href="/publisher/[^"]+"[^>]*>([^<]+)</a>', r'<div>\s*Publisher:\s*<span>\s*(.+?)\s*</span>\s*</div>'],
webpage, 'video_uploader', default=False)
formats = []
for stream in media.get('streams', []):
audio_lang = stream.get('audio_lang')
hardsub_lang = stream.get('hardsub_lang')
vrv_formats = self._extract_vrv_formats(
stream.get('url'), video_id, stream.get('format'),
audio_lang, hardsub_lang)
            for f in vrv_formats:
                # Prefer soft-subbed streams; additionally boost streams whose
                # audio or hardsub language matches the page locale
                if not hardsub_lang:
                    f['preference'] = 1
                language_preference = 0
                if audio_lang == language:
                    language_preference += 1
                if hardsub_lang == language:
                    language_preference += 1
                if language_preference:
                    f['language_preference'] = language_preference
formats.extend(vrv_formats)
        # Fall back to the legacy RPC API when the vilos config yields no streams
        if not formats:
available_fmts = []
for a, fmt in re.findall(r'(<a[^>]+token=["\']showmedia\.([0-9]{3,4})p["\'][^>]+>)', webpage):
attrs = extract_attributes(a)
href = attrs.get('href')
if href and '/freetrial' in href:
continue
available_fmts.append(fmt)
if not available_fmts:
for p in (r'token=["\']showmedia\.([0-9]{3,4})p"', r'showmedia\.([0-9]{3,4})p'):
available_fmts = re.findall(p, webpage)
if available_fmts:
break
if not available_fmts:
available_fmts = self._FORMAT_IDS.keys()
            # Deduplicate streams returned by both RPC calls via their encode id
            video_encode_ids = []
for fmt in available_fmts:
stream_quality, stream_format = self._FORMAT_IDS[fmt]
video_format = fmt + 'p'
stream_infos = []
streamdata = self._call_rpc_api(
'VideoPlayer_GetStandardConfig', video_id,
'Downloading media info for %s' % video_format, data={
'media_id': video_id,
'video_format': stream_format,
'video_quality': stream_quality,
'current_page': url,
})
if isinstance(streamdata, compat_etree_Element):
stream_info = streamdata.find('./{default}preload/stream_info')
if stream_info is not None:
stream_infos.append(stream_info)
stream_info = self._call_rpc_api(
'VideoEncode_GetStreamInfo', video_id,
'Downloading stream info for %s' % video_format, data={
'media_id': video_id,
'video_format': stream_format,
'video_encode_quality': stream_quality,
})
if isinstance(stream_info, compat_etree_Element):
stream_infos.append(stream_info)
for stream_info in stream_infos:
video_encode_id = xpath_text(stream_info, './video_encode_id')
if video_encode_id in video_encode_ids:
continue
video_encode_ids.append(video_encode_id)
video_file = xpath_text(stream_info, './file')
if not video_file:
continue
if video_file.startswith('http'):
formats.extend(self._extract_m3u8_formats(
video_file, video_id, 'mp4', entry_protocol='m3u8_native',
m3u8_id='hls', fatal=False))
continue
video_url = xpath_text(stream_info, './host')
if not video_url:
continue
metadata = stream_info.find('./metadata')
format_info = {
'format': video_format,
'height': int_or_none(xpath_text(metadata, './height')),
'width': int_or_none(xpath_text(metadata, './width')),
}
                    # fplive RTMP URLs can be rewritten into direct HTTP URLs
                    if '.fplive.net/' in video_url:
video_url = re.sub(r'^rtmpe?://', 'http://', video_url.strip())
parsed_video_url = compat_urlparse.urlparse(video_url)
direct_video_url = compat_urlparse.urlunparse(parsed_video_url._replace(
netloc='v.lvlt.crcdn.net',
path='%s/%s' % (remove_end(parsed_video_url.path, '/'), video_file.split(':')[-1])))
if self._is_valid_url(direct_video_url, video_id, video_format):
format_info.update({
'format_id': 'http-' + video_format,
'url': direct_video_url,
})
formats.append(format_info)
continue
format_info.update({
'format_id': 'rtmp-' + video_format,
'url': video_url,
'play_path': video_file,
'ext': 'flv',
})
formats.append(format_info)
self._sort_formats(formats, ('preference', 'language_preference', 'height', 'width', 'tbr', 'fps'))
metadata = self._call_rpc_api(
'VideoPlayer_GetMediaMetadata', video_id,
note='Downloading media info', data={
'media_id': video_id,
})
subtitles = {}
for subtitle in media.get('subtitles', []):
subtitle_url = subtitle.get('url')
if not subtitle_url:
continue
subtitles.setdefault(subtitle.get('language', 'enUS'), []).append({
'url': subtitle_url,
'ext': subtitle.get('format', 'ass'),
})
if not subtitles:
subtitles = self.extract_subtitles(video_id, webpage)
        # The webpage provides more accurate data than series_title from the XML
series = self._html_search_regex(
r'(?s)<h\d[^>]+\bid=["\']showmedia_about_episode_num[^>]+>(.+?)</h\d',
webpage, 'series', fatal=False)
season = episode = episode_number = duration = thumbnail = None
if isinstance(metadata, compat_etree_Element):
season = xpath_text(metadata, 'series_title')
episode = xpath_text(metadata, 'episode_title')
episode_number = int_or_none(xpath_text(metadata, 'episode_number'))
duration = float_or_none(media_metadata.get('duration'), 1000)
thumbnail = xpath_text(metadata, 'episode_image_url')
if not episode:
episode = media_metadata.get('title')
if not episode_number:
episode_number = int_or_none(media_metadata.get('episode_number'))
if not thumbnail:
thumbnail = media_metadata.get('thumbnail', {}).get('url')
season_number = int_or_none(self._search_regex(
r'(?s)<h\d[^>]+id=["\']showmedia_about_episode_num[^>]+>.+?</h\d>\s*<h4>\s*Season (\d+)',
webpage, 'season number', default=None))
info = self._search_json_ld(webpage, video_id, default={})
return merge_dicts({
'id': video_id,
'title': video_title,
'description': video_description,
'duration': duration,
'thumbnail': thumbnail,
'uploader': video_uploader,
'series': series,
'season': season,
'season_number': season_number,
'episode': episode,
'episode_number': episode_number,
'subtitles': subtitles,
'formats': formats,
        }, info)


class CrunchyrollShowPlaylistIE(CrunchyrollBaseIE):
IE_NAME = 'crunchyroll:playlist'
_VALID_URL = r'https?://(?:(?P<prefix>www|m)\.)?(?P<url>crunchyroll\.com/(?!(?:news|anime-news|library|forum|launchcalendar|lineup|store|comics|freetrial|login|media-\d+))(?P<id>[\w\-]+))/?(?:\?|$)'
_TESTS = [{
'url': 'http://www.crunchyroll.com/a-bridge-to-the-starry-skies-hoshizora-e-kakaru-hashi',
'info_dict': {
'id': 'a-bridge-to-the-starry-skies-hoshizora-e-kakaru-hashi',
'title': 'A Bridge to the Starry Skies - Hoshizora e Kakaru Hashi'
},
'playlist_count': 13,
}, {
# geo-restricted (US), 18+ maturity wall, non-premium available
'url': 'http://www.crunchyroll.com/cosplay-complex-ova',
'info_dict': {
'id': 'cosplay-complex-ova',
'title': 'Cosplay Complex OVA'
},
'playlist_count': 3,
'skip': 'Georestricted',
}, {
# geo-restricted (US), 18+ maturity wall, non-premium will be available since 2015.11.14
'url': 'http://www.crunchyroll.com/ladies-versus-butlers?skip_wall=1',
'only_matching': True,
    }]

    def _real_extract(self, url):
show_id = self._match_id(url)
webpage = self._download_webpage(
self._add_skip_wall(url), show_id,
headers=self.geo_verification_headers())
title = self._html_search_meta('name', webpage, default=None)
episode_paths = re.findall(
r'(?s)<li id="showview_videos_media_(\d+)"[^>]+>.*?<a href="([^"]+)"',
webpage)
entries = [
self.url_result('http://www.crunchyroll.com' + ep, 'Crunchyroll', ep_id)
for ep_id, ep in episode_paths
]
        # The show page lists episodes newest first; reverse for chronological order
        entries.reverse()
return {
'_type': 'playlist',
'id': show_id,
'title': title,
'entries': entries,
}