anarcat anarcat Montréal, Québec, planet Ocean https://anarc.at/

anarcat/bup-cron 22

mirror of the bup-cron repository, may be out of date while i figure out github mirror things

anarcat/ArchiveBot 1

ArchiveBot, an IRC bot for archiving websites

anarcat/community 1

my simple borg cron wrapper

anarcat/darkroom 1

Simple distraction-free editing

anarcat/alabaster 0

Lightweight, configurable Sphinx theme. Now the Sphinx default!

anarcat/ansible 0

Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy. Avoid writing scripts or custom code to deploy and update your applications— automate in a language that approaches plain English, using SSH, with no agents to install on remote systems.

anarcat/atheme-contrib-modules 0

Community-supported modules for Atheme

anarcat/attic 0

Deduplicating backup program

anarcat/babeld 0

The Babel routing daemon

anarcat/battery-stats 0

Log battery charge (battery-stats-collector), show gnuplot graphs (battery-graph)

started anarcat/terms-benchmarks

started time in 2 days

issue comment linkchecker/linkchecker

Limit of 100001 for URLs

Hi, I am experiencing the same issue, but a bit more on the PITA side. Once the linkchecker hits the magic number, the memory consumption rises.

MircoPerini

comment created time in 4 days

fork Atemu/terms-benchmarks

Reproducible results for LWN review of terminal emulators: https://lwn.net/Articles/749992/ https://lwn.net/Articles/751763/. Mirror of GitLab repository, possibly out of date.

https://gitlab.com/anarcat/terms-benchmarks/

fork in 8 days

issue closed xorg62/tty-clock

screenshot in the README?

tty-clock looks nice, so could it have a screenshot?

thanks!

closed time in 9 days

novelistparty

PR opened linkchecker/linkchecker

Unhandled LookupError in robotparser2.py

Sites presenting an invalid content-type character set raise a LookupError exception.

For an example, see the single link on this page I've prepared.
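
A minimal sketch of the failure mode and one possible guard (safe_decode is a hypothetical helper for illustration, not linkchecker's actual code): when a server advertises a charset that Python's codec registry doesn't know, decoding raises LookupError unless it is caught.

# Sketch: guard charset lookups from a Content-Type header so an unknown
# encoding falls back to a default instead of raising LookupError.
import codecs

def safe_decode(data: bytes, charset: str, fallback: str = "utf-8") -> str:
    try:
        codecs.lookup(charset)   # raises LookupError for an unknown charset
    except LookupError:
        charset = fallback       # fall back instead of aborting the check
    return data.decode(charset, errors="replace")

print(safe_decode(b"hello", "no-such-charset"))  # -> hello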

+3 -0

0 comment

1 changed file

pr created time in 19 days

issue opened xorg62/tty-clock

Co-ordinate order should be reversed

X in code is the Y on screen!

In the current version of tty-clock, the x variable represents the y axis on the Cartesian plane. Even ttyclock.geo.x changes the position of the clock on the y axis.

This should be changed so that the x variable is a value on the x axis.
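
For context, a minimal Python curses sketch (tty-clock itself is C, but the ncurses convention is the same): the first coordinate argument is the row, i.e. the y axis, which is why a variable named x that is passed first ends up moving the clock vertically.

# Sketch of the curses (y, x) argument order the issue describes.
import curses

def main(stdscr):
    stdscr.clear()
    # addstr(y, x, ...): the FIRST argument is the row (y axis),
    # the second is the column (x axis).
    stdscr.addstr(5, 20, "row 5, column 20")
    stdscr.refresh()
    stdscr.getkey()

curses.wrapper(main)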

created time in 21 days

issue opened linkchecker/linkchecker

URL with encoded backslash is reported as 404

Summary

A URL with an encoded backslash is reported as 404, but it actually exists.

Steps to reproduce

  1. linkchecker https://golatex.de/wiki/%5Cdocumentclass

Actual result

URL        `http://www.golatex.de/wiki/%5Cdocumentclass'
Real URL   https://golatex.de/wiki/documentclass
Check time 0.310 seconds
Info       Redirected to `https://golatex.de/wiki/documentclass'.
Result     Error: 404 Not Found

Expected result

Reported as ok

Environment

  • Operating system: Linux Mint 20
  • Linkchecker version: 10.0.0.dev2
  • Python version: Python 3.8.5
  • Install method: pip3 install git+https://github.com/linkchecker/linkchecker.git
  • Site URL: http://www.golatex.de/wiki/%5Cdocumentclass

Configuration file


#html output
[output]
log=html

#check external links too
checkextern=1

Logs

LinkChecker 10.0.0.dev2
Copyright (C) 2000-2016 Bastian Kleineidam, 2010-2020 LinkChecker Authors
LinkChecker comes with ABSOLUTELY NO WARRANTY!
This is free software, and you are welcome to redistribute it
under certain conditions. Look at the file `LICENSE' within this
distribution.
Get the newest version at https://linkchecker.github.io/linkchecker/
Write comments and bugs to https://github.com/linkchecker/linkchecker/issues

Start checking at 2020-11-04 18:53:11+002


URL        `http://www.golatex.de/wiki/%5Cdocumentclass'
Real URL   https://golatex.de/wiki/documentclass
Check time 0.341 seconds
Info       Redirected to `https://golatex.de/wiki/documentclass'.
Result     Error: 404 Not Found

Statistics:
Downloaded: 0B.
Content types: 0 image, 1 text, 0 video, 0 audio, 0 application, 0 mail and 0 other.
URL lengths: min=37, max=37, avg=37.

That's it. 1 link in 2 URLs checked. 0 warnings found. 1 error found.
Stopped checking at 2020-11-04 18:53:12+002 (1 seconds)

Other notes

I guess the issue has something to do with escaping the backslash in the URL and in Python strings. curl sees an HTML redirect on this URL:

curl http://www.golatex.de/wiki/%5Cdocumentclass
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>302 Found</title>
</head><body>
<h1>Found</h1>
<p>The document has moved <a href="https://golatex.de/wiki/%5cdocumentclass">here</a>.</p>
<hr>
<address>Apache/2.4.25 (Debian) Server at www.golatex.de Port 80</address>
</body></html>
With -L, curl finds the right page (as shown in Firefox).
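
A quick way to see the escaping round-trip (a sketch of my guess, not a confirmed diagnosis): if the checker unquotes the URL somewhere before re-sending it, the %5C is lost; re-quoting restores it.

# Sketch of the percent-encoding round-trip, assuming the bug is the
# %5C being unescaped before the request is retried.
from urllib.parse import quote, unquote

url = "https://golatex.de/wiki/%5Cdocumentclass"
decoded = unquote(url)
print(decoded)                    # https://golatex.de/wiki/\documentclass
print(quote(decoded, safe=":/"))  # restores .../wiki/%5Cdocumentclass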

created time in a month

PR opened xorg62/tty-clock

added header file in dependencies

Added the header file as a dependency for building the binary.

+2 -1

0 comment

1 changed file

pr created time in a month

issue closed linkchecker/linkchecker

IEEE links trigger 'BadStatusLine' errors

Summary

DOIs that redirect to the IEEE website lead to "Connection aborted" errors due to a "Bad status line". Maybe this is a problem with the IEEE website, but the same URL works fine in a browser.

Steps to reproduce

It reliably occurs for all DOIs that redirect to IEEE when run in my 'routine' mode, using the following command to check my entire web tree (running on a local copy). But when I run linkchecker only on one of the files that has the problematic link, it passes just fine!

linkchecker --check-extern --ignore-url 'research/project/.*' --ignore-url 'research/kotz-privacy/tangled-web.html' --ignore-url 'research/kotz-privacy/kotz-privacy.*html' --ignore-url 'research/bredin-position/bredin-position.html' --ignore-url 'research/kotz-future/kotz-future.html' --ignore-url 'research/kotz-future2/kotz-future2.html' --ignore-url 'research/choudhary-sdcr/choudhary-sdcr.html' --ignore-url 'research/choudhary-sdcr/ChoudharyFile.html' --ignore-url 'research/kotz-dapple/kotz-dapple.html' --ignore-url 'research/kotz-dapple/bib.html' --ignore-url 'research/kotz-diskmodel-sw/kotz-diskmodel-sw.html' index.html

Actual result

Look below for the second result, URL `https://doi.org/10.1109/BSN.2018.8329691'

LinkChecker 10.0.0.dev0              Copyright (C) 2000-2014 Bastian Kleineidam
LinkChecker comes with ABSOLUTELY NO WARRANTY!
This is free software, and you are welcome to redistribute it
under certain conditions. Look at the file `LICENSE' within this
distribution.
Get the newest version at https://linkchecker.github.io/linkchecker/
Write comments and bugs to https://github.com/linkchecker/linkchecker/issues

Start checking at 2020-07-07 20:16:06+002
10 threads active,   434 links queued,   54 links in 498 URLs checked, runtime 1 seconds
10 threads active,  1063 links queued,  542 links in 1623 URLs checked, runtime 6 seconds

URL        `https://publications.waset.org/pdf/10010313'
Name       `page'
Parent URL file:///Users/dfk/projects/web/research/batsis-development/index.html, line 24, col 3
Real URL   https://publications.waset.org/pdf/10010313
Check time 1.541 seconds
Result     Error: 404 Not Found
10 threads active,  1010 links queued,  595 links in 1653 URLs checked, runtime 11 seconds

URL        `https://doi.org/10.1109/BSN.2018.8329691'
Name       `DOI'
Parent URL file:///Users/dfk/projects/web/research/pope-eda-bsn/index.html, line 23, col 3
Real URL   http://ieeexplore.ieee.org/document/8329691/
Check time 1.485 seconds
Size       0B
Info       Redirected to
           `http://ieeexplore.ieee.org/document/8329691/'.
Result     Error: ConnectionError: ('Connection aborted.', BadStatusLine('¥ò\x91\x0c<ãôq8\x89Ãà<î\x14Wôê>ñÌ\x8fOL\x8b\xa0£ìl¤´¢\tH\x16±cN\x13P\x9c\x134a!Eaè\x9b0eCÍñÓ\x9cÀ<\x9b§\x98IË\x89ýùRÏ?\x01Í\x05\x8eÙZ\x17ÏQ\x14¶³ÄÃ0ü`RÀÂ8Í!òóyÈËÎ\x15v\x9e?}¶»û\x18w2¼`...

Expected result

No error.

Environment

  • Operating system: Darwin Kotzbook2019-June-2.local 18.7.0 Darwin Kernel Version 18.7.0: Mon Apr 27 20:09:39 PDT 2020; root:xnu-4903.278.35~1/RELEASE_X86_64 x86_64
  • Linkchecker version: LinkChecker 10.0.0.dev0 released xx.xx.xxxx Copyright (C) 2000-2014 Bastian Kleineidam
  • Python version: Python 2.7.17; Python 3.7.8
  • Install method: from git
  • Site URL: https://www.cs.dartmouth.edu/~kotz/

Configuration file

n/a

Other notes

What's really odd is that both of the following commands complete with no error!

linkchecker --check-extern research/pope-eda-bsn/index.html
linkchecker --check-extern https://doi.org/10.1109/BSN.2018.8329691

but this one, which should work, triggered several 'BadStatusLine' errors:

linkchecker --check-extern http://ieeexplore.ieee.org/document/8329691/
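
For what it's worth, a hedged reproduction sketch (assuming, as the traceback suggests, a requests-based check; the retry loop is mine, not linkchecker's): a malformed HTTP response surfaces as requests' ConnectionError wrapping BadStatusLine, and a single retry often succeeds when the server is only intermittently broken.

# Sketch: catch the ConnectionError that wraps BadStatusLine and retry once.
import requests

def check(url: str, retries: int = 1) -> str:
    last_error = ""
    for _ in range(retries + 1):
        try:
            r = requests.head(url, allow_redirects=True, timeout=10)
            return f"{r.status_code} {r.url}"
        except requests.exceptions.ConnectionError as exc:
            last_error = f"Error: {exc}"  # ('Connection aborted.', BadStatusLine(...))
    return last_error

print(check("http://ieeexplore.ieee.org/document/8329691/"))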

closed time in a month

dfkotz

issue comment linkchecker/linkchecker

IEEE links trigger 'BadStatusLine' errors

This problem appears to have been resolved, either due to an update in linkchecker or in Python.

I'm now at LinkChecker 10.0.0.dev2 (released xx.xx.xxxx), Python 2.7.17 and Python 3.9.0, on Darwin Kotzbook2020 19.6.0 (Darwin Kernel Version 19.6.0: Mon Aug 31 22:12:52 PDT 2020; root:xnu-6153.141.2~1/RELEASE_X86_64 x86_64).

dfkotz

comment created time in a month

issue comment linkchecker/linkchecker

Suppress terminal output

@cjmayo: Hi, that's great; thank you for the response and the link! (Btw, you were missing the source dir in your answer above. ;-) The -F redirect is perfect / elegant.

While the output below will show more errors than exist (due to moving/copying those HTML files there for testing), the command works well. :-)

[victoria@victoria docs_test]$ dpl

Wed Oct 21 01:49:43 PM PDT 2020
/mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test

total 396
-rw-r--r-- 1 victoria victoria  22469 Oct 19 17:49 Adelson_Foundation.html
-rw-r--r-- 1 victoria victoria  16285 Oct 19 19:46 Black_Lives_Matter.html
-rw-r--r-- 1 victoria victoria   5681 Oct  5 21:48 Blackstone_Group.html
-rw-r--r-- 1 victoria victoria  18705 Oct 19 17:00 Cleta_Mitchell.html
-rw-r--r-- 1 victoria victoria  12156 Oct 20 09:35 Corporation-Facebook.html
-rw-r--r-- 1 victoria victoria  49940 Oct 19 15:49 heritage_foundation.html
-rw-r--r-- 1 victoria victoria   5716 Oct 20 10:50 Jeffrey_Ross_Toobin.html
-rw-r--r-- 1 victoria victoria  13617 Oct 17 19:30 linkchecker-test_file1.html
-rw-r--r-- 1 victoria victoria  12674 Oct 17 19:30 linkchecker-test_file2.html
-rw-r--r-- 1 victoria victoria  16295 Oct 20 11:29 Paul_Elliott_Singer.html
-rw-r--r-- 1 victoria victoria   9510 Oct 19 17:47 Preserve_America_PAC.html
-rw-r--r-- 1 victoria victoria 128543 Oct 19 15:15 sources.html
-rw-r--r-- 1 victoria victoria  12889 Oct 19 15:45 trump-erases-transgender-rights.html
-rw-r--r-- 1 victoria victoria  24640 Oct 19 15:46 trump_rewrites_health_rules-pence_sees_conservative_agenda_born_again.html
-rw-r--r-- 1 victoria victoria  22656 Oct 19 14:50 Vaccine_Choice_Canada.html

[victoria@victoria docs_test]$ time linkchecker --check-extern -r2 --timeout=90 --no-status --no-warnings \
    -q -F csv//mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/linkchecker_errors.csv \
    /mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/ \
    --ignore-url=cnp_members.* --ignore-url=google\.com
Command exited with non-zero status 1
0:39.50

[victoria@victoria docs_test]$ cat linkchecker_errors.csv | wc -lc
     79   36981

[victoria@victoria docs_test]$ tail -n4 linkchecker_errors.csv

george_soros-open_society_foundations.html#george_soros-open_society_foundations;file:///mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/sources.html;;URLError: <urlopen error [Errno 2] No such file or directory: '/mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/george_soros-open_society_foundations.html'>;;;False;file:///mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/george_soros-open_society_foundations.html;1254;63;my version;-1;-1;0.000772953033447;0;2;
arabella_advisors.html#1630_fund;file:///mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/sources.html;;URLError: <urlopen error [Errno 2] No such file or directory: '/mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/arabella_advisors.html'>;;;False;file:///mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/arabella_advisors.html;1260;352;Sixteen Thirty Fund;-1;-1;0.000463008880615;0;2;
dark-money-networks-fake-news-sites.html#soros-1630-01;file:///mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/sources.html;;URLError: <urlopen error [Errno 2] No such file or directory: '/mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/dark-money-networks-fake-news-sites.html'>;;;False;file:///mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/dark-money-networks-fake-news-sites.html;1260;674;here;-1;-1;0.000444889068604;0;2;
# Stopped checking at 2020-10-21 13:50:35-007 (38 seconds)

[victoria@victoria docs_test]$ 

cf. my previous (original) command:

[victoria@victoria docs_test]$ time linkchecker --check-extern -r2 --timeout=90 --no-status --no-warnings \
    -ocsv /mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/ \
    --ignore-url=cnp_members.* --ignore-url=google\.com \
    > /mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/linkchecker_errors.csv
Command exited with non-zero status 1
0:41.23

[victoria@victoria docs_test]$ cat linkchecker_errors.csv | wc -lc
     79   36977

[victoria@victoria docs_test]$ tail -n4 linkchecker_errors.csv
george_soros-open_society_foundations.html#george_soros-open_society_foundations;file:///mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/sources.html;;URLError: <urlopen error [Errno 2] No such file or directory: '/mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/george_soros-open_society_foundations.html'>;;;False;file:///mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/george_soros-open_society_foundations.html;1254;63;my version;-1;-1;0.000150918960571;0;2;
arabella_advisors.html#1630_fund;file:///mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/sources.html;;URLError: <urlopen error [Errno 2] No such file or directory: '/mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/arabella_advisors.html'>;;;False;file:///mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/arabella_advisors.html;1260;352;Sixteen Thirty Fund;-1;-1;9.89437103271e-05;0;2;
dark-money-networks-fake-news-sites.html#soros-1630-01;file:///mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/sources.html;;URLError: <urlopen error [Errno 2] No such file or directory: '/mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/dark-money-networks-fake-news-sites.html'>;;;False;file:///mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/dark-money-networks-fake-news-sites.html;1260;674;here;-1;-1;9.79900360107e-05;0;2;
# Stopped checking at 2020-10-21 13:56:33-007 (40 seconds)
victoriastuart

comment created time in a month

issue comment linkchecker/linkchecker

Suppress terminal output

Alternatively:

Turn off terminal output: -o none, or its shortcut -q
Output CSV to file: -F csv/<filepath>

linkchecker --check-extern -r2 --timeout=90 --no-status --no-warnings \
-q -F csv//mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/linkchecker_errors.csv \
--ignore-url=cnp_members.* --ignore-url=google\.com

https://linkchecker.github.io/linkchecker/man/linkchecker.html#output-options

victoriastuart

comment created time in a month

issue closed linkchecker/linkchecker

Suppress terminal output

Apologies if this is a FAQ (I looked ...). I want to script the use of linkchecker and I don't want to see the output in the terminal.

I am using

linkchecker --check-extern -r2 -t5 --timeout=90 --no-status --no-warnings -o none \
-ocsv /mnt/Vancouver/domains/buriedtruth.com/1.0/docs \
--ignore-url=cnp_members.* \
--ignore-url=google\.com \
| tee /mnt/Vancouver/domains/buriedtruth.com/1.0/linkchecker_errors.csv

and I am getting all output echoed in the terminal.

closed time in a month

victoriastuart

issue comment linkchecker/linkchecker

Suppress terminal output

Nevermind -- solution:

linkchecker --check-extern -r2 --timeout=90 --no-status --no-warnings \
-ocsv /mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/ \
--ignore-url=cnp_members.* --ignore-url=google\.com \
> /mnt/Vancouver/domains/buriedtruth.com/linkchecker-tests/docs_test/linkchecker_errors.csv
victoriastuart

comment created time in a month

issue opened linkchecker/linkchecker

Suppress terminal output

Apologies if this is a FAQ (I looked ...). I want to script the use of linkchecker and I don't want to see the output in the terminal.

I am using

linkchecker --check-extern -r2 -t5 --timeout=90 --no-status --no-warnings -o none \
-ocsv /mnt/Vancouver/domains/buriedtruth.com/1.0/docs \
--ignore-url=cnp_members.* \
--ignore-url=google\.com \
| tee /mnt/Vancouver/domains/buriedtruth.com/1.0/linkchecker_errors.csv

and I am getting all output echoed in the terminal.

created time in a month

issue closed linkchecker/linkchecker

move website out of this repo and into RTD

nevermind history, the whole git repo is still in doc/web here. we should just remove this whole thing - i don't understand why it's there and in the gh-pages branch. i understand the gh-pages is only a rendering, but we should not duplicate content like this without purpose.

note that those directories are not in the exported tarballs because of an exception in .gitattributes. we shouldn't need such a hack either. i also noticed a .rej file lying around the website which was detected during the debian package build as part of the last release in #141.

kind of a mess. :) IMHO, we should just use a standard Sphinx-style website instead of some weird site like this. then the site can link to GitHub releases and PyPI instead of duplicating tarballs like this. this would also make the repository much slimmer and easier to clone.

closed time in a month

anarcat

issue comment linkchecker/linkchecker

move website out of this repo and into RTD

Let's tag it postponed then and close for now. If anybody wants to champion a move to RTD, we can reopen.

anarcat

comment created time in a month

issue comment linkchecker/linkchecker

move website out of this repo and into RTD

I think we have a satisfactory solution now with automated publishing of Sphinx to the gh-pages branch. RTD can have advantages (e.g. if supporting multiple versions of documentation), but it means a membership on another site to manage...

anarcat

comment created time in 2 months

push event linkchecker/linkchecker

cjmayo

commit sha 53e5532014177f9d6c0d441c3793f77eb2900670

Merge pull request #518 from cjmayo/edit-index

Fix the edit link for the web site front page

view details

push time in 2 months

push event linkchecker/linkchecker

Chris Mayo

commit sha ffb1d9953c3d60577ba973542a6c4ecb11537f81

Fix the edit link for the web site front page

view details

Chris Mayo

commit sha 1adcea1ff42eb036b199c69724990bad3f2568ab

Merge branch 'master' into edit-index

view details

Chris Mayo

commit sha 21e5de90a68df65d337a1add865838bce1bb2909

Merge pull request #518 from cjmayo/edit-index

Fix the edit link for the web site front page

view details

push time in 2 months

PR merged linkchecker/linkchecker

Fix the edit link for the web site front page

Oops... I didn't use the automated links because some pages are includes of other files, and the code pages are generated.

+1 -1

0 comment

1 changed file

cjmayo

pr closed time in 2 months

push event linkchecker/linkchecker

cjmayo

commit sha 55b091b6b33e5897da34666bf63c4cac20c89201

Merge pull request #516 from cjmayo/biplist

Stop using biplist

view details

push time in 2 months

push event linkchecker/linkchecker

Chris Mayo

commit sha e922dd0224467004bcd547dea871e28227e9963e

Stop using biplist

plistlib has supported binary files since Python 3.4.

view details

Chris Mayo

commit sha 5ba15f0e0ebba631f87249e8cad6eb587d8a5002

Merge branch 'master' into biplist

view details

Chris Mayo

commit sha 838d61b8cafbc9d230bbcccc31a53fa8397fafc2

Merge pull request #516 from cjmayo/biplist

Stop using biplist

view details

push time in 2 months

PR merged linkchecker/linkchecker

Stop using biplist

plistlib has supported binary files since Python 3.4.


Didn't spot this myself the first time, but fortunately a comment was added to the Bitbucket bug.

test_safari_bookmarks_binary works with plistlib.
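
As a sanity check of the standard-library capability this relies on (a sketch with made-up data, not the actual Safari bookmarks fixture): plistlib can write and read binary plists natively since Python 3.4, which is what makes biplist redundant.

# Round-trip a binary plist with only the standard library.
import plistlib

bookmarks = {"Children": [{"URLString": "https://example.com/"}]}
blob = plistlib.dumps(bookmarks, fmt=plistlib.FMT_BINARY)  # write binary plist
assert plistlib.loads(blob)["Children"][0]["URLString"] == "https://example.com/"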

+5 -34

0 comment

6 changed files

cjmayo

pr closed time in 2 months

push event linkchecker/linkchecker

cjmayo

commit sha 5ab7db72064a1165d438eb906f367024e6ea913a

Merge pull request #517 from cjmayo/pages-root

Publish documentation to the root of gh-pages

view details

push time in 2 months

Pull request review comment linkchecker/linkchecker

Publish documentation to the root of gh-pages

The Web Site is hosted by GitHub Pages from the docs/ directory of the gh-pages branch. When updates to LinkChecker are pushed, the web site is built and published automatically by a GitHub action ``.github/workflows/publish-pages.yml``. For information, a manual process to build and publish the web site would look like:

I have added the setup.py build step, in its own section because it is needed for man pages too. That does mean our steps are recorded, even if there isn't a recipe for a manual publish. At the moment any merge will break the web site, so best get this done.

Yes, two local ones. ghp-import looks interesting, although with the action there is no need to do anything, as long as it works...

cjmayo

comment created time in 2 months

push event linkchecker/linkchecker

Chris Mayo

commit sha a5706e233bfa5ff764966ed244184b8a5a297a34

Publish documentation to the root of gh-pages

view details

Chris Mayo

commit sha b1067236853e4147ae2b07e8ec4afd4bb6fdfc17

Add setup.py build step to documentation

view details

Chris Mayo

commit sha 76dd3bc66aeccf46c2798a87b839caa15120223a

Merge pull request #517 from cjmayo/pages-root Publish documentation to the root of gh-pages

view details

push time in 2 months

PR merged linkchecker/linkchecker

Publish documentation to the root of gh-pages

This gives the Action full control of the branch, it will wipe out anything else e.g. the .gitignore (that isn't needed though if not building in the same repository).

Before merging this I will push a manual commit to gh-pages moving the existing files to root, and point the GitHub pages to the root directory in the project configuration.

+7 -16

0 comment

2 changed files

cjmayo

pr closed time in 2 months

push event linkchecker/linkchecker

Chris Mayo

commit sha 7da2bb2bb5d7f9a759ff62d4c3bf58545da8dfef

Move from docs to the root directory

peaceiris/actions-gh-pages always writes .nojekyll to the root directory.

view details

push time in 2 months

Pull request review comment linkchecker/linkchecker

Publish documentation to the root of gh-pages

The Web Site is hosted by GitHub Pages from the docs/ directory of the gh-pages branch. When updates to LinkChecker are pushed, the web site is built and published automatically by a GitHub action ``.github/workflows/publish-pages.yml``. For information, a manual process to build and publish the web site would look like:

By "2 repositories" do you mean two checkouts of the same repository?

I've used ghp-import in the past; it works great. No extra checkouts are needed, just a docs build in a local .gitignore'd directory that gets imported into the root of the gh-pages branch by the ghp-import script.

But I don't think we'll need that if we have the GH action.

cjmayo

comment created time in 2 months
