diff --git a/README.md b/README.md
index b0634121..23f700f0 100644
--- a/README.md
+++ b/README.md
@@ -22,7 +22,7 @@
-A cli to browse and watch anime (alone AND with friends). This tool scrapes the site allanime.
+A cli to browse and watch anime (alone AND with friends). This tool scrapes the site allmanga.
@@ -562,7 +562,7 @@ Ani-skip uses the external lua script function of mpv and as such – for now
## Homies
-* [animdl](https://github.com/justfoolingaround/animdl): Ridiculously efficient, fast and light-weight (supports most sources: allanime, zoro ... (Python)
+* [animdl](https://github.com/justfoolingaround/animdl): Ridiculously efficient, fast and light-weight (supports most sources: allmanga, zoro ... (Python)
* [jerry](https://github.com/justchokingaround/jerry): stream anime with anilist tracking and syncing, with discord presence (Shell)
* [anipy-cli](https://github.com/sdaqo/anipy-cli): ani-cli rewritten in python (Python)
* [Dantotsu](https://github.com/rebelonion/Dantotsu): Rebirth of Saikou, Best android app for anime/manga/LN with anilist integration (Kotlin)
diff --git a/ani-cli b/ani-cli
index aa2b38ea..4d59379c 100755
--- a/ani-cli
+++ b/ani-cli
@@ -155,7 +155,7 @@ get_links() {
[ -z "$ANI_CLI_NON_INTERACTIVE" ] && printf "\033[1;32m%s\033[0m Links Fetched\n" "$provider_name" 1>&2
}
-# innitialises provider_name and provider_id. First argument is the provider name, 2nd is the regex that matches that provider's link
+# initialises provider_name and provider_id. First argument is the provider name, 2nd is the regex that matches that provider's link
provider_init() {
provider_name=$1
provider_id=$(printf "%s" "$resp" | sed -n "$2" | head -1 | cut -d':' -f2 | sed 's/../&\
@@ -332,7 +332,7 @@ play() {
else
play_episode
fi
- # moves upto stored positon and deletes to end
+ # moves up to stored position and deletes to end
[ "$player_function" != "debug" ] && [ "$player_function" != "download" ] && tput rc && tput ed
}
@@ -496,7 +496,7 @@ esac
# moves the cursor up one line and clears that line
tput cuu1 && tput el
-# stores the positon of cursor
+# stores the position of cursor
tput sc
# playback & loop
diff --git a/hacking.md b/hacking.md
index f361a4dc..e291a2e4 100644
--- a/hacking.md
+++ b/hacking.md
@@ -3,7 +3,7 @@ Ani-cli is set up to scrape one platform - currently allanime. Supporting multip
However, since ani-cli is open-source and the pirate anime streaming sites are so similar, you can hack ani-cli to support any site that follows a few conventions.
-## Prequisites
+## Prerequisites
Here's the list of skills you'll need, which the guide will take for granted:
- basic shell scripting
- understanding of http(s) requests and proficiency with curl
@@ -33,7 +33,7 @@ An adblocker can help with reducing traffic from the site, but beware of extensi
Once you have the pages (urls) that you're interested in, it's easier to inspect them from less/an editor.
The debugger's inspector can help you with finding what's what but finding patterns/urls is much easier in an editor.
-Additionally the debugger doesn't always show you the html faithfully - I've experineced some escape sequences being rendered, capitalization changing - so be sure you see the response of the servers in raw format before you write your regexes.
+Additionally the debugger doesn't always show you the html faithfully - I've experienced some escape sequences being rendered, capitalization changing - so be sure you see the response of the servers in raw format before you write your regexes.
### Core concepts
If you navigate the site normally from the browser, you'll see that each anime is represented by a URL that comprises an ID (which identifies a series/season of a series) and an episode number.
@@ -50,7 +50,7 @@ Just try searching for a few series and see how the URL changes (most of the tim
If the site uses a POST request or a more roundabout way, use the debugger to analyze the traffic.
Once you figured out how searching works, you'll have to replicate it in the `search_anime` function.
-The `curl` in this function is responsible for the search request, and the following `sed` regexes mold the respons into many lines of `id\ttitle` format.
+The `curl` in this function is responsible for the search request, and the following `sed` regexes mold the response into many lines of `id\ttitle` format.
The reason for this is the `nth` function, see it for more details.
You'll have to change some variables in the process (e.g. allanime_base) too.
@@ -83,7 +83,7 @@ From here they are separated and parsed by `provider_init` and the first half on
Some sites (like allanime) have these urls not in plaintext but "encrypted". The decrypt allanime function does this post-processing, it might need to be changed or discarded completely.
If there's only one embed source, the `generate links..` block can be reduced to a single call to `generate_link`.
-The current structure does the agregation of many providers asynchronously, but this is not needed if there's only one source.
+The current structure does the aggregation of many providers asynchronously, but this is not needed if there's only one source.
### Extracting the media links