Cloning Zwiki With MoinMoin
see 2009-10-17-ZwikiCloneChoice
Oct19'2009
-
install MoinMoin on my MacBook
-
easy enough to change a page to WikiCreole syntax
-
argh this kills WikiWord Automatic Linking!
-
trade some emails with WikiCreole people
-
Nov07
-
decide to start with MoinMoin native parser, tweak WikiWord rules (see `word_rule` in `text_moin_wiki.py`) (see Wiki Name Examples for edge cases) - basic success
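For flavor, a simplified, runnable sketch of the kind of WikiWord rule involved - the real `word_rule` in `text_moin_wiki.py` is much longer, and the character classes here are illustrative, not the actual tweak:
{{{
import re

# simplified stand-in for the word_rule regex in MoinMoin/parser/text_moin_wiki.py
word_rule = r'''
    (?<![A-Za-z0-9/])        # nothing word-like (or a slash) immediately before
    (?:[A-Z][a-z0-9]+){2,}   # two or more CamelCase humps
    (?![A-Za-z0-9/])         # nothing word-like (or a slash) immediately after
'''
wikiword = re.compile(word_rule, re.VERBOSE)

print([m.group(0) for m in wikiword.finditer("see FrontPage and WikiWord-s")])
# -> ['FrontPage', 'WikiWord']
}}}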
-
MoinMoin SubWiki concept/syntax (slash) conflicts with my normal-english use of slashes between separate WikiWord-s.
-
now playing with bold/italic, trying to mimic Structured Text rules
-
hmm, getting close - both end up italic, is that good enough? (in `/MoinMoin/parser/text_moin_wiki.py`)
-
then move along to playing with lists
-
have to turn double-linebreaks into single-linebreaks, I can live with that
-
ugh realize that asterisk for bullet is turning everything italic!
-
oops realize that's not true - was only having problem because my top-level items had no leading space, just jumping into bullet - need leading space
-
and in the process discover that I don't need single-linebreaks, it's ok with that or doubles.
-
-
-
-
went back to bold/italic - found issue - both cases call `emph_repl()`, which forks inside, so had to tweak that
going to play with the WikiLog homepage view
-
copy `macro/RecentChanges.py` to `WikiLog.py`
-
get body of RecentChanges page (via filesystem), copy into Front Page (via wiki edit form), tweaking macro call to `WikiLog`. Rip out some bits I don't want. Save. Get `WikiLog` (with angle brackets) appearing in page.
rename function `macro_RecentChanges()` to `macro_WikiLog()` - now get RecentChanges list on the Front Page.
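For the record, the shape of the renamed macro - a hedged sketch, since at this point the body is still just the copied RecentChanges code:
{{{
# MoinMoin/macro/WikiLog.py - sketch; body is the copied macro_RecentChanges()
# logic, only the entry-point name has changed (per the rename above)
def macro_WikiLog(macro):
    request = macro.request
    # ... body as copied from RecentChanges.py: builds the recent-changes
    # HTML and returns it as a unicode string ...
    return u''
}}}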
-
Nov08
-
continue playing with Front Page
- copy `theme.__init__.recentchanges_entry()` to `wikilog_entry()`, call that from within `WikiLog.py`, tweak function contents to output what I want (not yet worrying about including actual page content rendering). Success.
Nov09
-
trying to include rendered-contents of individual blog-bits nodes in Front Page
stumbling blindly in the dark. Current key line is `html.append(Page(d['pagename'], self.request).send_page())`, which doesn't work.
Nov13
-
still working through rendering node contents in Front Page
-
figure out key issue is that containing page logic is building up an `html` string which it later returns, but core Page object parsing/rendering of WikiText just outputs to socket directly: `Page.send_page()` calls `Page.send_page_content()` which calls `Page.format()` which calls `parser.format()` (using the parser appropriate to the WikiText format of the page, e.g. `text_moin_wiki.py`) which calls `self.request.write()`, which is really implemented by `request.standalone.write()`, which writes to socket? (I'm not positive I have this right....)
-
some options
-
find way to capture/reroute socket-write to append to a variable and return it (Python has some nice concepts for treating socket i/o like file i/o; this is probably the "wrong direction", but will take a look). Update: yes, even though there are some cool ideas for redirecting stdout, I don't think this would work without creating lots more weirdness.
-
hmm, or is there a way to capture `parser.format()`'s call to `self.request.write()`?
change Front Page macro model to do socket output of all pieces, just like page rendering - suspect that isn't possible since macro is called from within a page with a different model...
-
copy/fork `parser.format()` to accrue/return a rendered page body, rather than calling `self.request.write()` - maybe change original function to call a new function which either does `self.request.write()` or appends to a global variable, depending on some keyword trigger? This area sounds most realistic... (sketch below)
other ideas?
-
-
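A minimal sketch of what that keyword-trigger idea might look like (the helper name and signature are mine, not MoinMoin's):
{{{
# hypothetical helper, not actual MoinMoin code: route output either to the
# socket (via request.write) or into a buffer the caller can later ''.join()
def emit(request, text, buffer=None):
    if buffer is not None:
        buffer.append(text)    # accrue for return
    else:
        request.write(text)    # normal direct-to-socket path
}}}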
Nov16
-
continuing Front Page render-multiple-nodes issue
-
hmm, does MoinMoin support any Transclusion? Ah, through the `content` action. MoinMoin:HelpOnActions.
- hmm, don't see anything appropriate in the `action` directory of code...
-
in `action/__init__.py` find `do_content()`, but it just calls `do_show()` which just calls `Page.send_page()`, so that puts us back in the same hole...
-
Nov18
-
continuing Front Page render-multiple-nodes issue
-
MoinMoin:HelpOnLinking shows double-curly-bracket notation for embedding, though examples are all images. Try within a single regular wiki page:
-
remote image url: works
-
remote page url (with and without `.html` suffix): doesn't work, doesn't even link
local wiki page - hmm, seems to work, but text shows up in weird little scrolling text-area - yes confirmed that by including really long page - just got ~10 lines of height displayed. Interesting...
-
-
try just including the pagename within double-curly-brackets in code - the static string gets rendered, not the page contents. (Tried triple-curly-brackets, too.) (Not surprised, but worth a shot...)
-
normal transclusion happens where `parser._transclude_repl()` (with `word` as the unicode pagename) calls `formatter.transclusion()` and then `formatter.text()`
- that call also has groups= {{{
{u'comment': None, u'link_params': None, u'word_name': None, u'remark_on': None, u'tableZ': None, u'url_scheme': None, u'parser': None, u'emph': None, u'entity': None, u'emph_ib_or_bi': None, u'sub_text': None, u'table': None, u'interwiki_wiki': None, u'transclude_params': None, u'small_on': None, u'sub': None, u'interwiki_page': None, u'tt': None, u'tt_bt_text': None, u'parser_args': None, u'tt_text': None, u'li': None, u'link_target': None, u'hmarker': None, u'tt_bt': None, u'transclude_target': u'Portal Collaboration Roadmap', u'sup': None, u'strike': None, u'macro_args': None, u'interwiki': None, u'email': None, u'transclude_desc': None, u'word_bang': None, u'dl': None, u'transclude': u'{{Portal Collaboration Roadmap}}', u'macro_name': None, u'small_off': None, u'big_off': None, u'li_none': None, u'parser_unique': None, u'smiley': None, u'url_target': None, u'big': None, u'sgml_entity': None, u'parser_nothing': None, u'word': None, u'strike_off': None, u'heading_text': None, u'remark_off': None, u'remark': None, u'ol': None, u'indent': None, u'link_desc': None, u'url': None, u'macro': None, u'strike_on': None, u'big_on': None, u'parser_line': None, u'rule': None, u'sup_text': None, u'parser_name': None, u'u': None, u'emph_ibi': None, u'small': None, u'link': None, u'heading': None, u'emph_ibb': None, u'word_anchor': None} }}}
-
Nov19
-
continuing Front Page render-multiple-nodes issue
-
hack minimal copy of key pieces together into `theme.wikilog_entry()` - it works!
now look at it, realize that it's a client-side include using the `object` tag.
how many browsers will choke on it?
-
will this matter for SEO? Not really, since (a) link to blogbit page isn't inside the include, so it will get scraped, and (b) don't really want blogbit stuff getting indexed as part of Front Page anyway.
-
how much control over styling do I have with this? Doesn't appear to be a setting that lets the height auto-adjust to fit whatever gets inserted. Might any of the CSS properties have value?
-
-
bugger, just realized that it turns every WikiWord (in a transcluded page) into a link that makes it look like the associated page already exists. That's part of the `action=content` logic.
-
-
think about design a bit - see today's notes on Wiki page
-
realize this client-side Transclusion model ain't gonna cut it for the RSS page. And that feed is probably more important than the Front Page look. So going to focus on that.
-
but first...
-
want to play with embedding sidebar boxes onto Front Page
-
note that `EmbedObject` macro could be useful - no, it renders to "Attach file"
ah, `Include` macro - that works nicely, and even renders Wiki Name appropriately as to whether matching page exists or not.
- look at code - aha! it uses `Page.send_page()` having captured the output with `request.redirect()`! Now going to go back to Front Page code.
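The capture pattern in Include looks roughly like this (a hedged paraphrase from memory of the 1.8 code, not a verbatim copy; `request` and `inc_page` come from the macro context):
{{{
import StringIO

buf = StringIO.StringIO()
request.redirect(buf)       # temporarily point request output at a buffer
try:
    inc_page.send_page(content_only=True)
finally:
    request.redirect()      # restore normal (socket) output
html = buf.getvalue()       # the rendered page body, captured
}}}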
-
Nov19
-
continuing Front Page render-multiple-nodes issue
- Copied/tweaked `Include` logic for `wikilog_entry()` and now Front Page works great!
-
also used `Include` for sidebar boxes on Front Page. They render fine, but can't get them valign-top. Not going to worry about that for now, going to move to RSS feed.
RSS feed
-
copy/tweak code from `wikilog_entry()` to `rss_rc.execute()`
-
it "works" - looks nice in FireFox
-
need to validate it
-
update: validates, though get a warning, and actual body looks pretty weird, like lots of history/diff items buried away in there.... (part of the issue for me is that this is RSS v1.0 - RDF)
-
test with Net NewsWire or something
- update: doesn't complain, but also doesn't give me any items.
-
-
seems to be treating every WikiWord like it has a matching page - ah, the issue is that the visual difference in Front Page is because of CSS, which doesn't apply in RSS. I may just live with that for now.
-
-
Nov21
-
RSS feed
-
realize I should validate a single wikipage HTML, and then the Front Page HTML, before looking at validation of the RSS...
-
individual pages: validate fine
-
Front Page: many errors. Mostly "undefined element", or "closing P which is not open", etc. Suspect conflict between stated HTML version and "real" version (strict vs transitional flag, etc.)
-
even under Transitional, many errors
-
ah, rookie bug (still in my Zwiki pages) of surrounding node tables with p-tags.
-
removed those, now pass in Transitional
-
-
still fail (14 errors) in Strict because of those attributes, etc.
-
-
Nov23
-
Front Page HTML validation errors
-
made some changes, still have 6 errors all from old font-size tags.
-
changed to span with `font-size:smaller` tags, and now pass as Strict!
-
-
RSS...
-
W3C validator says good, with just 1 warning:
description should not contain relative URL references: /Novel
-
Validome says there are 51 errors, starting with "Attribute `rdf:resource` is not permitted to appear in element `rdf:li`."
it also rejects the RecentChanges feed from the MoinMoin site itself.
-
and the RecentChanges feed from MeatBall.
-
could part of the problem here be that the wiki namespace is defined as a PURL which is broken/missing? See MeatBall:ModWiki
-
this spec still points to that PURL. It's the latest version, but it's "draft 0.5" from 2001!
-
Community Wiki uses FeedBurner. It includes the wiki namespace, but validates at Validome. But it looks like it's using Atom Standards, not RDF
-
-
I took a dump of my new feed, and replaced every `rdf:li rdf:resource` with `rdf:li resource` - that left me with 27 errors, all of which are from `wiki:version` and `dc:contributor`
-
the only other Dublin Core tag the feed has is `dc:date`, which doesn't seem to generate complaints
- `dc:contributor` looks like a valid value to me
-
-
-
Step back: why do I care about this? Esp if W3C validator (and FireFox) is happy?
-
because Net NewsWire doesn't seem to recognize any news items in the feed.
-
ThunderBird identifies items nicely.
-
News Life identifies items also.
-
-
I hereby declare the RSS feed to be working.
-
Front Page design
-
got right column vertical-top like I wanted it
-
remaining sidebar issues:
-
have each one include a title, linked to that page? I just did that in manual edit of Front Page for now.
-
hmm, included pages have different rendering of paragraph-breaks - you just get a linebreak with no empty line of space. Is that a CSS thing?
-
separate border/bgcolor for each sidebar item? Right now they just flow together
-
make that whole column more obviously different than the left column?
-
(at some point have to deal with rest of page design - header/footer)
-
-
Nov24
- Front Page design: as of 15:59 I am satisfied for now
Nov30
-
header design
-
edit `wikiconfig.py` in moin root folder; plus `screen.css` inside `.../wiki/htdocs/modern/css/` plus parallel directories
debate removing stuff, decide just to put ribbon items on right instead of left side. Then deleted some stuff from the navi_bar
-
hmm, editbar looks fine in header, messy in footer.
-
do I even want it in both places?
-
hmm, who should see it at all?
-
going to play with user-login stuff to try and get more clear rendering of my-view vs reader-view.
-
-
Dec01
-
-
create my own user acct; edit `wikiconfig.py` to give self full rights, read-only to "All"
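Roughly the wikiconfig.py change, as a hedged sketch (these are real MoinMoin config names, but the exact values here are from memory, not the log):
{{{
# in wikiconfig.py, class Config
superuser = [u"BillSeitz"]
acl_rights_before = u"BillSeitz:read,write,delete,revert,admin"
acl_rights_default = u"All:read"
}}}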
bring up new Safari window to see anonymous view
-
Edit options now missing from editbar (but the bar is still there, for "Info", "Attachments", etc.)
-
argh, see that editbar and navbar items are not flush-right as they are in FireFox! (And, in FireFox, the bottom Edit Bar is a mess, seeming to get grouped in with the "credits" footer, which follows. Try to tweak this without effect.) Irritating. Decide not to do anything for now...
-
-
Dec02
- migration: start scripts: scrape (finish?), convert, upload
Dec04
-
migration: finish convert script
-
What should I do next?
-
spaces within WikiWords?
-
Visible Backlinks? yes this one
-
-
-
click on page-link to see current list
-
thinking that it makes sense to see how Sister Site/Twin Pages are handled/rendered, to mimic that.
-
Dec07
-
-
code refers to `sistersites`, not `twinpages`, `twinsite`, `twinwiki`
-
put `sistersites = ` list of tuples in `wikiconfig.py`, restarted server, nothing obvious happened....
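The setting looked something like this (a hedged sketch - the site names/URLs are placeholders, not the 3 sites actually used; each URL is supposed to return a plaintext sisterpages list, which becomes the sticking point below):
{{{
# in wikiconfig.py - hypothetical example values
sistersites = [
    (u'MoinMoin', u'http://moinmo.in/?action=sisterpages'),
    (u'MeatBall', u'http://www.usemod.com/cgi-bin/mb.pl?action=sisterpages'),
]
}}}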
ah, hit localhost/Bill Seitz?action=pollsistersites, and it ran a long time then gave me count of pages from the 3 sites I defined in wikiconfig
- but still don't see anything on any particular page... even after editing
-
it seems like code in `theme/navibar()` should be doing the job...
it looks like sisterpages data being cached is a messed up dictionary - parsing isn't working right - for all 3 sites
-
Dec08
-
-
Thought it was working at <http://moinmo.in/Twin Pages> but it turns out that just has manual InterWiki links at the bottom.
-
core issue is that polling code wants a plaintext list of pages and urls, while sites seem to just have AllPages pages that give HTML linking list of pages. So polling code fails. Either need new code or to find new url to poll...
-
do I bother working on this, or get back to the more central Visible Backlinks thing? The latter.
- note that the code to show the Twin Pages is in `theme/navibar()` - it probably makes sense to put the Visible Backlinks right in the same place.
-
Dec09
-
-
Should I consider client-side-include like with `{{...}}` (see Nov18 above)?
Make that piece ASynch, which could be nice
- Because I haven't given any thought to Scalability/caching of this stuff
-
Might be bad for SEO. Though if title/header links back to same stuff, maybe that doesn't matter.
-
Going to give it a try...
-
-
If I call fullsearch, it returns a whole page (headers/footer, etc.). Next: copy fullsearch to something else, rip out wrapper bits? (Or add logic to fullsearch to suppress those few lines?) (What's the simplest first step to spike the architecture?)
-
copy Transclusion bits from earlier version of Front Page, start tweaking
-
hmm, it's using RecentChanges instead of current pagename! (Plus including wrapper, as expected.)
-
ah, `pagename` is what it's looping through for navibar - instead use `current`. That worked!
also, note that this include is showing up before the page body. More CSS fun, I'm sure. Ignoring that issue for now.... (To Do)
-
also, note this gets included in pointless pages like Front Page (maybe that's ok), and actual Title Search (not ok - To Do)
-
-
-
now get rid of wrapper
-
use flag argument `content=content` for ignoring wrapper?
Actually, going to try using `context=-1` (context is normally used to define the num of chars to display around matching terms in results item).
oops, that gets overridden because titlesearch=1
-
ok, tweaked the logic around adjusting context based on titlesearch
-
-
suppressed header and footer if context<0 - looking promising (results still too verbose)
-
results too verbose
-
tweak context switch to avoid calling results-with-context
-
do better: add special case for context==-1 calling pageList() but with other param values - excellent, now just get straight list of links as ul.
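In outline, the special case added to fullsearch.py (a hedged sketch - the surrounding variable names are assumptions; `pageList()` is the real results method):
{{{
# context == -1 is my flag for "bare list": no header/footer, no per-hit
# context text, just the ul of matching page links
if context == -1:
    request.write(results.pageList(request, request.formatter))
else:
    # normal path: full page chrome plus context snippets per match
    request.theme.send_title(title, pagename=pagename)
    ...
}}}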
-
-
-
-
next To Do
-
label for list
-
CSS to turn into flat list instead of vertical list?
-
CSS to put at bottom of page
-
logic to suppress this object on special pages like Title Search
-
-
Dec10
-
Visible Backlink: tweaking/formatting
- label for list: done
-
side-thought: multi-site farming: if I dump my acting-up-again Nokia N810 and get a fancy Mobile, should I try moving my Private Wiki to the cloud?
Dec15
-
site up: doing scraping
-
lots of manual tweak/restart for non-ASCII chars
-
134 failed scraping - server crashing
-
files: 13,538
-
list: 13,545
-
ah, problem is with files differing just in capitalization! Privacy, WeblogsCom, OhmyNews, ReST, Outsourc Ing, Globalization, Consumer
-
discovered `diary` directory! 96 files all scraped.
-
Dec27
-
side-note: early BlogBit pages with alpha suffix instead of descriptive filename. There are ~640 of them. (Considering reviewing and renaming them before uploading....)
-
scraping: resolve capitalization issues
-
process
-
identify specific dupes (trips?)
-
pick 1 winner
-
regex files with "wrong" link
-
merge content of dupe pages
-
-
WeblogsCom/Web Logs Com: choose WeblogsCom - still need to merge nodes
-
OhmyNews/Oh My News: choose Oh My News - still need to merge nodes
-
ReST/REST
-
OutSourcing/Outsourc Ing: choose OutSourcing
-
Globalization/Global Ization: choose Globalization
-
Con Sumer/Consumer: choose Consumer (nothing even pointed to the other one)
-
done!
-
Dec29: set up Blue Host account, set up domains (move teamflux.com and wikilogs.com but not fluxent.com)
Jan21'2010:
-
deploy Blue Host
-
comment out Visible Backlinks code for now
-
zip code, copy to Blue Host
-
Feb12: Gloria W says she's still too busy to deal with
Feb13:
-
basic static web
-
hit http://www.teamflux.com/ - get one of those generic pages
-
go into Blue Host C Panel, then click into File Manager. Edit `default.html` in teamflux directory. Hit URI again, now get that new page!
look at HTTP response header - running Apache 2.2.14 unix version.
-
Feb15
-
find official support doc on using Django, so obviously can get Python working.
-
get SSH turned on for account
-
see that Python 2.4.3 is already available
Feb16
-
use wget to download v1.8.7 raw code to my space.
-
want to get raw original code working first, then change my pieces
-
my code still at 1.8.5. Want to
-
get my changes in synch with 1.8.7
- why not jump to 1.9.1? just expect it to be easier to update my code?
-
-
-
argh changing my mind: 1.9x gives WSGI and improved Xap Ian Search Engine.
-
really want to get things into a Version Control System - SubVersion or a DVCS?
- Gloria W says SubVersion is fine. Also note that Basie (TracWiki clone in Django) has SubVersion browser, nothing for DVCS options yet.
-
how handle periodic reintegration? See 2nd answer here
-
Feb17
-
install moin-1.9.1 on my machine
-
starts with just 1 default page until you install some stuff, which I don't do (yet)
-
copy
.../wiki/data/pages/*
from old space to new space. Hitting one of those names now works. (Hitting home page doesn't give you Front Page until you do some config work.) Rendering obviously not kosher yet. -
back to branch-handling issue. Do I bother with SVN or just hack stuff by hand?
-
reconsider Version Control System process. I posted this to the same thread.
-
start with core source (1.8.5). Check in as trunk.
-
Make changes, checking in right to trunk (no branch).
-
When hit bunch of changes ready to roll out, tag as a release.
-
When there's a new version of core project you want to integrate:
-
make a branch for it.
-
overwrite files with source (hmm, file mod-dates could be an issue)
-
merge trunk into branch (to avoid fucking up the trunk). Make sure it all works.
-
merge into trunk, tag a release.
-
-
-
decide to bite the bullet, and start using SVN. Discover my old Mac Personal SVN Server got wiped out in disk crash. Not that there was anything in it. Decide I'm going to try to work with file:// uri's instead. Download SvnX.
- svnadmin create /users/billseitz/documents/svnRepository
-
hmm, how get above process started with existing work? And how deal with changing directory names? (Gloria W suggests a symlink from `/moin/` to the current-active directory. But I'm leaning toward renaming...) Plan:
- take orig 1.8.5, rename to just `moin`.
Bill-Seitzs-MacBook:documents billseitz$ svn import moin file:///users/billseitz/documents/svnRepository/moin -m "Initial import of moin-1.8.5 source"
-
delete moin directory, then
-
svn update file:///users/billseitz/documents/svnRepository/moin
-
hmm, all these files have "now" as their mod-date. That could be an issue.
-
-
copy changed files in my fork over the original source (don't want to just swap the whole folders because of .svn files)
-
launch. Seems to work OK.
-
hrm, Front Page isn't the RecentChanges list
-
RecentChanges page itself doesn't have any content - needs edit log?
-
hmm, notice that all pages seem to be Immutable. Or, I need to log in - jump to next day
-
-
`svn update`, `svn commit -m "all my changes through Feb01"`
-
make branch for 1.9.1
-
copy every directory of stuff in there? Maybe try emptying the directory and then copying top level over? (Will lose any .svn files, but maybe that's ok...)
-
merge trunk into branch, get working, merge branch back into trunk...
-
-
Feb20
-
getting changes copied over
-
Can I edit pages if I log in? Yes. But first I had to re-create my own account, which it let me do.
-
Having logged in, now RecentChanges gives me options for changing date-range. But even maxdays=90 gives me no content. Is it reading editlog? Yes, /wiki/data/edit-log
-
edit-log has newest stuff at bottom
-
timestamp looks like "1260234380000000"
-
note that each content page is a directory, which contains all the versions of that page, and its own edit-log
-
should I hack the edit-log after I mass-input stuff?
-
copied over edit-log, now RecentChanges looks nice. Edit Bill Seitz page, and RecentChanges updates. Links on RecentChanges work fine.
-
-
Front Page still static.
- hadn't copied change to Front Page into new directory, I guess. Copied. Still the static page. Restart server: bing! List of changed nodes, content of WeblogBits, sidebar pages, etc.
-
RSS feed? Looks fine in FireFox. Assuming it's fine for now.
- should add icon to Front Page plus link=rel. But not yet.
-
SubVersion checkin time.
-
lots of files to add, use this method to bulk-add
-
did `svn commit`
-
-
SubVersion - make branch for 1.9
-
Bill-Seitzs-MacBook:moin billseitz$ svn copy file:///users/billseitz/documents/svnRepository/moin/trunk file:///users/billseitz/documents/svnRepository/moin/branches/moin191
-
result: svn: Path `file:///users/billseitz/documents/svnRepository/moin/trunk` does not exist in revision 2
open repository in SvnX - confirm that there's no trunk/branches hierarchy created! So I need to re-org.
-
start to use SvnX to create trunk/branches directories, then delete to instead....
-
try using `svnadmin dump` per here
get error "Can't open file `/users/documents/billseitz/svnRepository/format': No such file or directory"
-
moved all pieces using SvnX instead.
-
-
use `svn copy` to make branch again. Fail: svn: Could not use external editor.... Oh duh, needed to include commit message. Now good. (At revision 23 because all those moves had to be of a single item at a time...). Then just dumped that directory and made the raw 1.9.1 directory `/moin/`.
-
-
Feb23
-
`/moin/` is now the original 1.9.1 - want to merge changes into it
svn merge -r 1:2 file:///users/billseitz/documents/svnRepository/moin/trunk
-
response: svn: `.` is not a working copy - ah, probably because no .svn files in there, because of how I wiped it.
let's try different approach. Dump that branch, then re-create it by importing the raw source.
-
inside /moin/ did svn import file:///users/billseitz/documents/svnRepository/moin/branches/moin191 - "make branch from source"
-
then have to do `svn checkout` to make it a working copy, move/rename directories.
`svn merge` - huh, that "is not a working copy" message again!
try checkout again, leave directory as `/moin191/`, then try merge...
svn: Unable to find repository location for `file:///users/billseitz/documents/svnRepository/moin/trunk` in revision 1
-
-
looking seriously at Mercurial. Let's define the process with their vocabulary:
-
if you're working on 2 branches at same time, each has/is its own repository
- so probably want a parent directory to be the project holder.
-
to merge 2 repositories, you first `pull` from one into the other, then you `merge` (in that destination repository), then you `commit`
- you might even `clone` 1 first to get new name, then pull the 2nd, etc...
-
hmm, thinking I'll use `clone` to pull in raw sources 1.8.5 and 1.9.1, since MoinMoin uses Mercurial. That might make merging my changes into 1.9.1 easier...
-
-
start using Mercurial
-
install binary from http://mercurial.berkwood.com/
-
executable is in `/usr/local/bin`
-
created global config file `~/.hgrc` to hold my name/email per http://blogs.sun.com/edwingo/entry/using_mercurial_on_mac_os
make empty `/moin/` directory
try hg clone http://hg.moinmo.in/moin/1.8.5 moin185
-
result - abort: requirement `<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">` not supported!
conclude it's returning a web page, not a set of files.
-
go to http://hg.moinmo.in/moin/1.8 in browser, find `files` link for 1.8.5
hg clone http://hg.moinmo.in/moin/1.8/file/294b97b991d3 moin185
-
just going to copy my changed version over those files and then commit!
-
did the copy of all files into `/moin/moin185/`
-
from inside `/moin/moin185/` did `hg commit -A -m "Commit my ..."` (getting r4492) (tagged `my185`)
launch server (run `wikiserver.py`), hit http://localhost:8080/ - get nice Front Page, nice RecentChanges (last edits were Dec08'2009)
-
-
go to http://hg.moinmo.in/moin/1.9 to get files url for 1.9.1, do `hg clone ... moin191`
-
should set up some ignore patterns for *.pyc and *.py~ files.
- go into moin185, remove those 2 sets of files, plus `*copy.py`; commit (two batches). Change `my185` tag to that r4495.
-
-
Feb26
-
Mercurial continued: how get 1.8.5 changes merged into 1.9.1?
-
did `hg incoming` - really too much stuff to be helpful, and don't want to trust any auto-merging bits.
well, maybe try trusting, if I'm sure I'm not taking the original 1.8.5 bits. Need to limit to my changesets.
-
huh, can't seem to use range of changesets! (tried with `hg incoming` and got "abort: unknown revision `4492:4495`!")
since I committed all my 1.8.5 changes at once, should be easy to apply just that changeset (though it may include those stupid .pyc files etc.)
- edited `/moin191/.hgignore` file - already had .pyc case, added .py~ line.
-
hmm, Mercurial Queues might be an approach... the UseCase explanation sounds like a good fit... ugh, but sounds like a patch "recorder", hard to back-generate patches.
-
-
hmm, maybe not an issue at all! Just did straight `hg incoming` without any revision specified, and it just listed my changes 4492+.
so I'm just going to let it run! pull, merge, commit!
-
did `hg pull ../moin185` - "added 5 changesets with 323 changes to 312 files (+1 heads)" ugh that's a lot
-
did `hg merge`
- "abort: outstanding uncommitted changes (use `hg status` to list changes)"
ah, because of my change to hgignore - committed just that file
-
now prompting for what to do with deleted file - these are files which are in 1 tree but not the other
-
if "local deleted" that means missing from 191 - pick
delete
-
if "remote deleted" that means missing from 185 - pick
change
-
hmmm, losing entire `/wiki/request/` directory - let it go
-
-
then lots of "failed" merges
-
final "191 files updated, 24 files merged, 3 files removed, 78 files unresolved; use
hg resolve
to retry unresolved file merges orhg update -C
to abandon"-
ugh, that
hg update -C
only applies to the whole directory, not individual files. So I guess I resolve what I can, then do-C
at the end -
ends up with 2 files, one with the raw name, and one with
.orig
extension- the raw-named file is the diff file
-
-
-
Mar02
-
Mercurial merging
-
looking at /MoinMoin/actions/Attach File.py
-
argh, nothing seems to match the separately downloaded 1.9.1 version!
-
hmm, maybe because the version I `pulled` had some more incremental changesets. So should find that exact version to download someplace.
-
-
Mar03
-
Mercurial merging
-
cloned fresh copy of 1.9.1 to compare (saved as moin191_pure)
-
compare pure to /MoinMoin/action/Attach.py.orig - still lots of changes, though I didn't touch this file!
- just totally overwrite pure on top of merged, then do `hg resolve -m Attach File.py`
-
same for some others in `/action/` that I hadn't touched
but I did touch `/action/fullsearch.py`
now the pure file matches the fullsearch.py.orig file!
-
but the merge-file shows some of my changes (in the 2nd-side) that are in neither!
-
hmm, sudden thought - will handle all the ones I didn't touch, then do a commit (can I do that?), then do the ones I did touch.
-
-
-
Mar08
-
Mercurial merging
-
haven't done anything else since previous post
-
try: hg commit -m "commit /action/ files I didn't even touch"
-
get "abort: unresolved merge conflicts (see hg resolve)"
-
hmm, trying `hg resolve -l` within actions directory still gives me all failed merges
next plan: do the fix on the 1 file left in the `action` directory I touched, then try to commit just that directory. Is that worth it? It doesn't give me the nice separation between things that I forked vs these other mystery conflicts.
alternative: try to commit single file (or list of files) using `-i` that I've cleaned up.
- nope, it won't let me
-
-
-
Mar11
-
Mercurial process
-
not going to continue this messy merging
-
instead
-
dump 191 work
-
clone fresh 191 from source
-
manually update files from 185 - log might help, plus I will get even better at putting my name in comments at fork bits
-
turn on feature discovered previously to make it easier to do this in the future... MercurialQueues.
-
-
edit `~/.hgrc` to turn on MQ
realize we're at 1.9.2 now - http://hg.moinmo.in/moin/1.9/file/ced05deb11ae
-
go into `/moin/`, do `hg clone http://hg.moinmo.in/moin/1.9/file/ced05deb11ae moin192`
- seems to hang (at "adding file changes" part). Killed it, tried again. It finally completed.
-
go into moin192 and do `hg qinit`
-
Mar13-22: screwing around reading Joel Spolsky Hg Init Mercurial tutorial.
Mar23
-
Mercurial: continuing Queues work.
-
re-tweak /moin192/.hgignore for `*.py~`
-
do `hg qnew fork.patch`
-
make changes in `/MoinMoin/actions/` directory
-
Mar31
-
Mercurial Queues
-
realize that never unpacked `/moin192/wiki/underlay.tar`, so do it now
crap will this muddy my Queue? Is there a way to leave this out of the patch?
-
decide to delete the `/underlay/` directory and see if I can get away with it.
-
-
copy all of `/moin185/wiki/data/pages/*` to `/moin192/`
- except for Bad Content page which is the only one already there in 192
launch moin192 to see what happens
-
dang it won't launch because of missing underlay - decide to untar it, then make sure to re-delete before any committing.
-
now asking for root gives that message page about needing to install language packages/pages. Ignoring that.
-
asking for Front Page directly gives me appropriate page, except that `WikiLog` action main area doesn't render (get `WikiLog` in dbl-angle-brackets).
copy `/MoinMoin/macros/WikiLog.py` over - that's the only changed/added item in that directory. Relaunch.
Now get nothing in that space. Need content changed?
-
Look at RecentChanges - no content there either. Try to Login, have to re-create my account, then it lets me log in and gives me links for longer time horizon on RecentPages. Click for 90days, still get nothing.
-
copy over `/wiki/data/edit-log` - now OK (last change was Dec9) (it gives you some number of most-recent-changes regardless of age)
try Front Page again - now get error msg in `WikiLog` panel because haven't updated `/MoinMoin/theme/__init__.py`. Change that file, relaunch.
Now Front Page looks good. (Note that rendering of my WikiText hasn't been updated in this version yet.)
-
Edit Wiki Sandbox page. Front Page gives fresh list, so does RecentChanges and RSS.
-
-
Apr01
-
Continuing changes on moin192
-
note have not committed anything in this fork
-
`/MoinMoin/formatter/*` - weird, `text_html.py` has change-date, but contents are identical to 185_orig
`/MoinMoin/parser/text_moin_wiki.py` - change, reload. Suspect caching is causing issue, don't see change on a given page until edit/save it.
looking at changes to CSS
-
in moin185 they were all within `/wiki/htdocs/{case}/css/`
-
in moin192 they are within `/MoinMoin/static/htdocs/{case}/css/*`
-
the only changes I made were in `screen.css` for case in (modern, modernized) (lots more changes in modern than modernized - is that correct?)
changed in moin192, reloaded
-
blech ugly bullets in header/footer ribbons!
-
-
-
Apr02
-
continuing changes to moin192
Apr06
-
continuing changes to moin192
-
`/wikiconfig.py` - note different place for static items like logo
still getting ugly bullets in header/footer
-
look in FireBug - realize it's including 2 different screen.css files! Hmm, is it?
-
don't find 2 calls to any screen.css in view-source
-
but do see that projection.css seems to import screen.css
-
but it did that in moin185 too!
-
hmm, now see that in moin192 we're getting `/modernized/` but in moin185 we're getting `/modern/`!
-
-
Apr14
-
continuing changes to moin192
-
make modern-screen.css match modernized
-
now have bullets, and they go down instead of across!
-
duh! Have been using `#` as comment flag, should be using `/* ... */`
- and also realize that there were some things I hadn't copied over.
Now looks good!
-
-
click on page title/header to find Back Links - yikes get error!
- discover that `fullsearch.py` has mix of tabs and spaces (not my fault) - make all spaces, now good.
-
time to commit and deal with Queue
-
edit `.hgignore` to ignore `/data/pages/` and `/wikiserver.app/Contents/`
-
realize that `.hgignore` already includes `/underlay/` so I don't have to delete that unzipped directory.
`hg status` gives {{{
M .hgignore
M MoinMoin/Page.py
M MoinMoin/action/fullsearch.py
M MoinMoin/action/rss_rc.py
M MoinMoin/action/sitemap.py
M MoinMoin/parser/text_moin_wiki.py
M MoinMoin/search/results.py
M MoinMoin/theme/__init__.py
M MoinMoin/web/static/htdocs/modern/css/screen.css
M MoinMoin/web/static/htdocs/modernized/css/screen.css
M MoinMoin/wikiutil.py
M wikiconfig.py
? MoinMoin/macro/WikiLog.py
? MoinMoin/web/static/htdocs/common/fluxent.jpg
}}}
not really clear on interplay of qrefresh, qcommit, and regular `hg commit`
-
this specifically says to not use `commit`. It says to use `hg qrefresh && hg qcommit`
-
Try that, get `abort: no queue repository`
-
go into `/.hg/patches/`, see it has no `/.hg/` subdirectory, so it isn't a repository - should have done `hg qinit -c`
-
that same page says I can fix this by going into `/.hg/patches/` and doing `hg init`, which I did
now try again `hg qrefresh && hg qcommit` - it says `nothing changed`
-
do `hg status` - now it just lists the to-be-added files
`hg log` shows latest changeset is today `5629:8c28067bb7e9`
-
do `hg qrefresh` - no output
do `hg add` for each of the 2 files needing to be added. (Hmm note that patches don't work with binary files, which applies to the image file. We'll see...)
now try again `hg qrefresh && hg qcommit` - it again says `nothing changed`. But now `hg status` doesn't list those added files.
I'm going to assume I'm done with this step. Time to get it deployed!
-
-
Apr16
-
deploy to Blue Host
-
want to use WSGI - start at <http://moinmo.in/HowTo/Apache With Mod W S G I>
-
jump to http://master19.moinmo.in/InstallDocs/ then http://moinmo.in/MoinMoinDownload
-
consider zipping my repository then uploading. But zip is 200MB? Forget it.
-
start with straight original source - SSH into Blue Host, wget http://static.moinmo.in/files/moin-1.9.2.tar.gz
-
do `tar xvzf moin-x.x.x.tar.gz` -> lots of unzipping
-> lots of unzipping -
cd into directory, do `python wikiserver.py` - it gives me status info that it's running on port 8080
try to hit http://teamflux.com:8080/ from MacBook, get no response; try /wiki also nothing.
-
open another SSH window, `wget http://localhost:8080/` and it works.
-
Apr17-19
-
deploy to Blue Host
-
state: having gotten built-in server working, now get working with Apache under CGI, then change to WSGI.
-
in http://master19.moinmo.in/InstallDocs/ find `Running MoinMoin with CGI, FastCGI, SCGI or AJP`
-
suspect need to move moin.cgi into Apache's cgi directory. Then edit config lines inside moin.cgi
-
copy `/wiki/server/moin.cgi` into `~/www/cgi-bin/`
-
hit http://www.teamflux.com/cgi-bin/moin.cgi - get 500 error
-
edit `moin.cgi` for `sys.path.insert(0, '/home/teamflux/moin-1.9.2')`
-
still get error on `from MoinMoin.web.flup_frontend import CGIFrontEnd` saying `ImportError: No module named MoinMoin.web.flup_frontend`
-
equiv error if tweak moin.cgi to just have `import MoinMoin`
-
but if mimic this process from python command-line, it works fine.
-
whoops, somehow I got it right sometime! Though page looks pretty weird, because all the CSS etc points to `/moin_static192/...` like `/moin_static192/modernized/css/common.css` (note that this is really in `/MoinMoin/web/static/`)
do `ln -s /home/teamflux/moin-1.9.2/MoinMoin/web/static/htdocs moin_static192` to make symbolic link - now pages look nice.
but note have to provide url like <http://www.teamflux.com/cgi-bin/moin.cgi/Bill Seitz> so need to do rewrites - but after getting working under WSGI (?).
-
(also note haven't got multiple spaces working either, nor any other config changes, as this is still stock code.)
-
also note IP# is `66.147.244.187`
-
also note: http://www.wikilogs.com/cgi-bin/moin.cgi doesn't work, even though it's the same IP#.
- copy `moin.cgi` to `/www/wikilogs/cgi-bin/` and it works
-
-
-
work on WSGI version now
-
copy `/wiki/server/test.wsgi` to `/www/cgi-bin/`, do `python test.wsgi` from command-line and get back some HTML that says it was successful
copy `/wiki/server/moin.wsgi` to same place, tweak path in file, hit http://www.teamflux.com/cgi-bin/moin.wsgi - it directs me to SUEXEC error_log which says `file has no execute permission: (/home1/teamflux/public_html/cgi-bin/moin.wsgi)`
- but `ls -l` shows permissions `-rwxr-xr-x` (755, right?) so what's up?
emailed Blue Host: ref KNN-77246-711.
-
for kicks decided to try FCGI
-
copied `moin.fcgi` over, edited path variable
<http://www.teamflux.com/cgi-bin/moin.fcgi/Bill Seitz> works
-
-
Apr20
-
deploy to Blue Host: stick with FCGI.
-
aim for correct host info
-
edit local `/etc/hosts` file to point webseitz.fluxent.com at 66.147.244.187
hit http://webseitz.fluxent.com - get error page saying no web server registered at that address
-
make `webseitz` subdirectory of `/www/fluxent/` but still get same result.
contact Blue Host support to ask question: ref HRW-53541-269. They say each domain should have its own .htaccess file, and in that file is where you handle subdomains/hosts (presumably with rewrite rules).
-
make htaccess file, ugh Rewrite Cond fun. Still not working!
-
duh! On the main C Panel page find "subdomains" - you register the hostname there! Now get home page fine.
-
-
get raw MoinMoin url working
-
make `..../fluxent/webseitz/cgi-bin/` and copy `moin.fcgi` into it
<http://webseitz.fluxent.com/cgi-bin/moin.fcgi/Bill Seitz> now gives page. Though ugly because static items are like http://webseitz.fluxent.com/moin_static192/modernized/css/common.css - should I make another Symbolic Link, or use a Rewrite Rule? And what about my plans to have multiple spaces?
-
make symbolic link, now page is pretty.
-
-
next steps
-
get 2 spaces working - `/wiki/` and `/diary/` with different pages for each
get Rewrite Rule working to match old url http://webseitz.fluxent.com/wiki
-
put in my custom code
-
-
setting up WikiFarm spaces
-
key doc appears to be <http://moinmo.in/HowTo/Ubuntu Farm>
-
set up multiple directories per <http://moinmo.in/HowTo/Ubuntu Farm#Wikifarm_creation>
-
make existing `/wiki/` directory the storage for all, so there will be `/wiki/wiki/` and `/wiki/diary/` (and `/wiki/master/` to clone, per instrux)
what the heck, can't seem to use `sudo` all of a sudden!
make ticket ref YEB-76247-728
-
hmm, maybe not necessary, was getting problem because of where I was trying to create directories, maybe not issue
-
-
will have single `moin.fcgi` shared across all - should I put it in `/fluxent/cgi-bin/`, `/fluxent/webseitz/cgi-bin/`, `/teamflux/cgi-bin/` or somewhere else?
- going to go with `/teamflux/cgi-bin/` so can have spaces served for different domains, just in case. (But each would need a unique name, can't re-use `wiki`...)
-
cp static directory to `/wiki/static/` - assumes it will be shared across all spaces. Is that good? Going that way for now. (Also note that means my CSS changes need to be put in that new location.)
FCGI location/mapping - per <http://moinmo.in/HowTo/Ubuntu Farm#Web_server_configuration>
-
delete `/fluxent/.htaccess` (after making copy for backup). Static pages still load fine.
now have to go back to mod_rewrite stuff for mapping the FCGI files - because the instructions here involve editing apache.conf which I don't have access to.
-
add FCGI Apache Handler from C Panel
-
edit .htaccess file
-
fail to get page - log says File does not exist: /home1/teamflux/public_html/fluxent/webseitz/wiki
-
haven't been exactly copying the examples I've found, because they don't handle the exact specs. But let's try to get something working, so let's start there. And focus on just getting <http://www.teamflux.com/wiki/Bill Seitz> to respond.
-
wait this is stupid. I should not worry about the URL prettiness yet, and just get the multi farm spaces working with any ugly URL first. Because if something is broken in that config, my error msgs might not be obvious.
-
-
So tweak farm config files per <http://moinmo.in/HowTo/Ubuntu Farm#Wikifarm_configuration>
-
farmconfig.py
-
wiki.py and diary.py
-
-
does it work? <http://www.teamflux.com/cgi-bin/moin.fcgi/wiki/Bill Seitz> gives "No wiki configuration matching the URL found!" - that seems slightly promising....
-
error log says `using farm config: /home/teamflux/moin-1.9.2/wiki/farmconfig.pyc`
-
hmm, thinking mapping in farmconfig.py isn't correct given our lack of Rewrite happening. So tweak that. No dice.
-
decide I don't think that mapping looks quite like the sample, so tweak again to `("wiki", r"^http://www\.teamflux\.com/cgi-bin/moin\.fcgi/wiki/.*$")`
-
now get real response page. No content. Ugly.
-
good: `using wiki config: /home/teamflux/moin-1.9.2/wiki/wiki.pyc`
-
no content (even "make new page") because: `The page "MissingPage" could not be found. Check your underlay directory setting.`
-
ah, realize both directories not correct. Tweak. But same result.
-
Look in `/underlay/pages/` - find Bad Content but no `MissingPage`. Hmm, but there is a `MissingPage` in `/data/pages/`!
-
-
ugly per error log: `File does not exist: /home1/teamflux/public_html/wiki`
-
-
-
hmm, could all this be a permissions issue? Everything has owner=teamflux instead of owner=wiki. But can't do `sudo`. Ugh. Use FTP webapp to change permissions (but it doesn't do recursive!).
- can't seem to get away from that "MissingPage could not be found" error.
-
-
Apr30
-
MoinMoin deploy - farm, then Rewrite Rule
-
maybe it's the `data_dir` path. Look through code to see how it's actually used
- see it joined (`os.path.join` - which means having ending `/` is irrelevant, since that function handles it) with various suffixes, but never with a prefix. So having absolute seems appropriate.
- see a case where it's joined with suffix `pages` - which just confirms more that we have the right value.
-
-
really want the darn logging to work.
-
look in `/wiki/wiki/event-log` (which has mod_time of today), see entries for `pagename=wiki%2FBill Seitz` - could it be mis-handling the space name and be trying to use the SubWiki concept?
- change `moin.fcgi` where it sets `fix_script_name` - change from `/wiki` to `None`. Still have same thing in the `event-log`
-
there's supposed to be a `/data/error.log` but I haven't seen it.
look at sample `logfile` which is a config file. See that it wants to write in `/tmp/moin.log`. Go find that. Discover last write as Apr27. At some point I mucked with the logging setting. Want to revert that....
ah, this is in `moin.fcgi`. Comment out line, but it doesn't help (still no change to `/tmp/moin.log`). Error log still says `WARNING MoinMoin.log:139 using logging configuration read from built-in fallback in MoinMoin.log module!` - which it has been saying at least recently.
edit `moin.fcgi` again, but use absolute path to `/wiki/config/logging/logfile` file. Yes, now file is getting written to!
check code, know to use `logging.info(msg)`
-
-
find something to log
-
`Page.get_body()` calls `f = codecs.open(self._text_filename()...)` and then `text = f.read()`
`def _text_filename()` says it returns complete filename, including path. That sounds good.
add log line, hit `Bill Seitz`, log says `/home/teamflux/moin-1.9.2/wiki/wiki/data/pages/wiki(2f)Bill Seitz/revisions/99999999`
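The probe itself, for the record (hedged - this is MoinMoin 1.9's standard logger idiom; the exact placement inside `get_body()` is assumed):
{{{
# at top of MoinMoin/Page.py (already in stock code):
from MoinMoin import log
logging = log.getLogger(__name__)

# the line added inside Page.get_body(), just before the codecs.open() call:
#   logging.info("get_body reading %r" % self._text_filename())
}}}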
-
so, yes, that `wiki(2f)Bill Seitz` bit means it's looking for a Sub Page.
and if I try to hit <http://www.teamflux.com/cgi-bin/moin.fcgi/diary/Bill Seitz> (note that's in `diary`) the log shows `/home/teamflux/moin-1.9.2/wiki/wiki/data/pages/diary(2f)Bill Seitz/revisions/99999999` - so that URL path isn't getting mapped to the right space, but the spacename is getting kept in some other place.
- I had hacked `farmconfig.py` where it maps URL to space, to get everything to map to `wiki` because the other stuff wasn't working. Maybe I need to fix that now...
-
-
let's fix mapping of URL
-
again, mapping-list is in `farmconfig.py`
`/MoinMoin/web/contexts.py` `def cfg()` includes `cfg = multiconfig.getConfig(self.request.url)` - log self.request.url there. Log shows what you'd expect.
delete the catch-all RegExp I had before that made everything map to `wiki`. Make the `diary` case consistent with the `wiki` case - had some wildcard variation before.
Now:
-
same Sub Page result as we've been getting for `wiki`
for `diary` now get `data_dir "/home/teamflux/moin-1.9.2/diary/data" does not exist, or has incorrect ownership or permissions.` Cool, so we should be able to get back to that. Oh, wait, that's not a permissions problem: that's the wrong path. Edited `diary.py` paths. Now get same page as for `wiki`. And the log shows `/home/teamflux/moin-1.9.2/wiki/diary/data/pages/diary(2f)Bill Seitz/revisions/99999999`
-
So now we have the right paths in both cases, but the space-name is also being passed like a Sub Page.
-
log `page_name` in `Page.__init__()` - yes, page_name is `wiki/Bill Seitz`
-
where does this happen?
-
Maybe in `/MoinMoin/wsgiapp.py` `dispatch()` - pagename comes out as `wiki/Bill Seitz`
-
path is `/wiki/Bill Seitz` based on `path = remove_prefix(request.path, cfg.url_prefix_action)`
- url_prefix_action is something you set in a space config if it's publicly accessible, to limit what Web Crawler-s can hit. (You have to also edit robots.txt to make this work.) So I should set this in `wiki.py` - but it's irrelevant to the issue here. <http://moinmo.in/Help On Configuration#urls> http://hg.moinmo.in/moin/1.6/raw-file/1.6.3/docs/CHANGES
-
request.path = `/wiki/Bill Seitz`, url_prefix_action = None
, url_prefix_action = None -
where!!!!????
-
-
-
-
May6: Gloria W says she might be able to help this weekend, so I send her info.
May8:
-
email person who wrote the doc on the doing farm setup.
-
she wonders if it's related to the `wikis` map of URLs to spaces. Doesn't smell like it (since we're successfully mapping to the correct space), but who knows?
where is `wikis` used?
`multiconfig.py` - `_url_re_list()` reads, compiles each regex and loads into cache. Ends with `return _url_re_cache`
-
that is called by `_getConfigName(url)` which just returns the space name (makes no change to path). Which is called by `getConfig()` which returns the config for the matching space (e.g. calling `diary.py`)
note that `multiconfig.getName()` is called in contexts
- which is called in `wsgiapp.init()`
-
-
-
-
so what does real solution smell like?
- I'm leaning toward changing wsgiapp.py to change pagename. Or request.path? Nah, let's just try pagename for now.
Jun02-08:
-
look carefully at how to strip space from pagename....
-
log cfg inside wsgiapp.py (use `cfg.__dict__`) to get instance as dictionary
-
only "plain" attribute is
siteid
. Yep, that's the one. -
test pagename for starting with siteid, strip - in wsgiapp
-
yes, it works!
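The strip logic, as a standalone hedged sketch of what went into wsgiapp.py (the exact surrounding code is assumed):
{{{
def strip_space_prefix(pagename, siteid):
    """Drop the farm space name (cfg.siteid) off a pagename like u'wiki/BillSeitz'."""
    prefix = siteid + u'/'
    if pagename.startswith(prefix):
        return pagename[len(prefix):]
    return pagename

assert strip_space_prefix(u'wiki/BillSeitz', u'wiki') == u'BillSeitz'
}}}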
- still ugly because paths to CSS/etc still wrong...
-
-
ugh, links don't include spacename path!
-
view-source shows links like `/cgi-bin/moin.fcgi/RecentChanges`
-
navi_bar is rendered in `theme/__init__.py` in `splitNavilink()` as `link = page.link_to(request, title)`
-
`Page.link_to()` calls `Page.url()`, then `link_to_raw()` which seems to use `relative=True`, but `Page.url()` uses `relative=False` as default argument, noting that this "changed in 1.7, in 1.6, the default was True." Can't find reasoning or related ticket or anything.... `link_to_raw()` calls `wikiutil.link_tag()`
change the default arg in `Page.url()` - no apparent effect.
even add logging to `url()` and confirm that relative=True
log url, and it's just the pagename - so where's the path being added????
-
-
Gloria W did some work late at night? No, but did some later...
-
get CSS now
-
<http://www.teamflux.com/wiki/Bill Seitz> gives you correct page, but
-
all links in page point to `/cgi-bin/moin.fcgi`
-
none of those links include `/wiki/` NameSpace
-
-
-
revert to local server (not using farm yet)
-
all links just look like `/Bill Seitz` (and `/Bill Seitz?action=....`)
- hmm, that leading `/` could cause issues with namespaces, but first, why this big difference in rendering?
-
-
hmm, could it be the cached version of the page is wrong?
- nope, dumped cache and got same result
-
maybe `wikiutil.link_tag()` - this gets called by `Page.link_to_raw()` (noted above)
see refs to `request.script_root`
-
yep, that's the place!
-
-
change `request.script_root`?
-
is it safe?
-
do we change it to `/wiki`, or do we change it to `''` and then prepend the `/wiki` to the pagename?
- inclined to do the former, as that seems the right scope
-
check on local server - there, script_root is empty, so `param[params]='/Seitz Kim'` or whatever.
- that params = `Page.link_to_raw.url`
-
how change it? "name `request` is not defined" in `wiki.py`
-
-
how about changing url in `link_tag()`? - argh, don't have cfg.siteid in there.
-
could re-calc siteid from request.PATH_INFO
-
first test back hacking static value - good!
-
try to substring `request.PATH_INFO`, get "AttributeError: `AllContext` object has no attribute `PATH_INFO`"
- have to use `request.environ["PATH_INFO"]` - now it works
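Shape of the link_tag() tweak (a hedged sketch - the variable names around it are assumptions):
{{{
# inside wikiutil.link_tag(): recover the space name from the request path,
# since cfg.siteid isn't in scope here
path_info = request.environ["PATH_INFO"]   # e.g. u'/wiki/Bill Seitz'
space = path_info.split('/')[1]            # -> u'wiki'
url = '/%s/%s' % (space, url)              # prepend the space name to the link
}}}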
"Info" (list of versions of page) works
-
Edit link (http://www.teamflux.com/wiki/Bill Seitz?action=edit&editor=text) works, gives form, but when you save it redirects to `/cgi-bin/moin.fcgi/Bill Seitz` - and change was not saved! (Form does HttpPost to the same wrong url.)
RecentChanges gives me a create-new-page link!
-
-
Jun17
continuing farm issues
-
doh! Realize that I never put any of my custom code onto Blue Host, because I just wanted to get normal stuff running. Is it time to change that? Or do I keep pushing generic/farm version further along? Note that I have 2 different sets of forks going (farm-related changes on server, my custom code local) without any integration. (Which also means no tracking or backup of changes on server.) Also note that I don't have Apache/flup set up on my MacBook.
- Decide: get farm working locally, even if with built-in server instead of Apache/flup. (Commit.) Then get server tweaks replicated locally. (Commit.) Then get all local changes installed on server. Then move server forward (in synch with local). Note this assumes I can keep things pretty well in synch without getting into Apache/flup.
Jun18
get farm working locally
-
move away wikiconfig.py, move up farmconfig.py, and copy mywiki.py to wiki.py and diary.py - also copy data directory to `wiki` and `diary` but keep single `underlay`
- copy some directory lines of code from wikiconfig.py to wiki.py and diary.py
-
server now works, but links don't go to namespace
- fix `wikiutil.link_tag()` - now good, plus RecentChanges works
-
try edit - get form, but submit doesn't work because doesn't go to namespace.
- tweak Page Editor.py and Graphical Page Editor.py - edit now saves
-
Find Page has same problem - form posts to wrong place
- tweak `Advanced Search.py` - that works for the Advanced Search
simple Find Page form is in `theme/__init__.searchform()`
-
but is this Whack A Mole? Maybe need to change at different level. Also note have this same issue on Blue Host.
-
after `Page.link_to_url()` calls `Page.url()`, url = `Find Page` - not finding issue in there
hmm, `searchform()` just calls a dictionary item
I see url set elsewhere in there to `self.request.href(d['page'].page_name)`
-
logging shows me that the arg passed there is the straight pagename, so it's the `request.href()` that's creating the issue.
there are 32 calls to `request.href()` scattered through the code.
code for `request.href()` probably the same as in `MoinMoin/support/werkzeug/utils` class `Href` - seems like main value is in generating queryargs as part of url. (werkzeug)
but it seems like in most of those calls, that feature isn't even used - just passing a pagename!
-
should I just turn these into relative URLs? (and what about the cases like `link_tag()` where I've purposely made absolutes to include the NameSpace? should I be consistent?)
-
-
-
Jun23
-
continue `request.href()` investigation
start at `wikiserver.py` and step forward
in `/MoinMoin/web/request.py` find `class Request(ResponseBase, RequestBase):` which contains `self.href = Href(self.script_root or '/', self.charset)` - and up at top of `request.py` find `from werkzeug import Href`
-
so in theory I should just change the arg in that `self.href = Href()` call!
- did that, set first arg=`''` and now it works! (Note that this isn't supporting Sub Page feature. Actually, maybe it does if parent/Sub Page gets passed as the pagename...)
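The one-line change, per the log (a hedged reconstruction of the surrounding line; `charset` stands in for `self.charset`):
{{{
from werkzeug import Href

charset = 'utf-8'  # stand-in for self.charset
# before (per the log): href = Href(script_root or '/', charset)
href = Href('', charset)  # empty base -> relative hrefs, so the /wiki space
                          # prefix in the current URL survives
}}}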
-
so now these are relative URL-s - should I change the other links? Yes.
-
undo all (?) of my previous path changes.
-
things look good in quick inspection
-
Back Links broken - looks like it's searching a huge space. Will come back to that.
-
editing works!
-
"info" works http://localhost:8080/wiki/WikiLog?action=info
-
simple Title search is going to root.
-
no, it's not that. A typical Title search works fine.
-
but if a single page is returned, it does a redirect to that matching page, but with no path.
-
fullsearch.py does `request.http_redirect(url)` with `url=Bill Seitz` - which calls `contexts.http_redirect()` with same url.
LiveHttpHeaders in FireFox says it's getting the full/absolute URL <http://localhost:8080/Bill Seitz> passed to it with the HttpStatus=302.
-
-
change `context.http_redirect()` to pass url that includes Space Name. Works!
-
-
regular fulltext search (from header) works! Also from Find Page!
-
but Back Links does that infinite search still!
-
fulltext search url is <http://localhost:8080/wiki/Bill Seitz?action=fullsearch&context=180&value=Bill Seitz&fullsearch=Text> - gets turned into `query=["Bill Seitz"]`
Back Link url is <http://localhost:8080/wiki/Bill Seitz?action=fullsearch&context=180&value=linkto%3A%22Bill Seitz%22> - turns into `query=[linkto:"Bill Seitz"]`
-
that gets passed to `MoinMoin.search.builtin.MoinSearch()`
hmm, could it be failing to get any matches, and trying to prepare list of entire site???
-
just let that query run (I kept killing it before) - it took 6min, then redirected me to just 1 match! (There should be 3!) (No wait, there should only be 1: of the other 2, 1 was the same page itself, and the other had the string in a link href.) Going to edit another page to trigger a match, and see what changes.
-
Actually, 1st going to make sure to get rid of some logging, as that might have been part of the weirdness too. Hah, now got that matching page pretty instantly!
-
now edit a different page to add to matches for string, do Back Links again - get list of matching pages very quickly! (Very weird before...) (By the way, note that all this searching is happening over 1874 pages! I'm guessing that's the underlay.)
-
-
-
-
- get code onto server
  - download File Zilla as FTP client, look online for config info, connect, copy over the changed .py files. (Hrm, was I sloppy about that? Did I over-write too glibly?)
  - realize I never did the full setup/config (e.g. Language Setup), which is probably why things like `/` and RecentChanges don't work. Start config...
  - realize I still have space-specific `underlay` directories, which I don't think I want. Change `wiki.py` and `diary.py` to change them both. (Changed back below!)
- `/wiki/` config
  - create Bill Seitz account; change `wiki.py` to add superuser
  - install English-allpages
  - set Front Page - hmm, it still doesn't work, says it isn't there (though at least you get the correct pretty template for add-new-page now)
  - RecentChanges works now, showing the standard/default version
  - add a wikilog bit: http://www.teamflux.com/wiki/z2009-11-18-HirshNewContentBusinessModels
  - it now appears in RecentChanges
  - hmm, just noticed that my local RecentChanges is the classic/default look too, though Front Page has the WikiLog look. Decided I'm going to keep it that way on purpose, because it does have certain benefits (version links, etc.).
  - urgh, haven't done anything yet, but now I get a Front Page that has that MoinMoin default set of links.
  - hmm, do I want to change it in each space, or in the underlay?
  - argh, just realized that installing English-allpages before did it just within `/wiki/underlay/`, not up in `shared/underlay/`. Argh, do I try to change that? How?
    - the Ubuntu Farm page I was following has a separate underlay for each space.
    - weird, seems like `wiki.py` had its underlay setting changed automatically! (`diary.py` hasn't changed, since I haven't installed there; will just leave it for now.)
  - so going to change Front Page within `/wiki/`
  - copy/paste from local Text Editor to the server web interface - but when I try to save I get "You can't change ACLs on this page since you have no admin rights on it!". Weird.
  - take out the acl line and save again, successfully. Though it's not very pretty yet. Fill in all those sidebar pages; looks a little more normal now.
  - but I can still edit Front Page, which I don't want. To Do!
- realize that rendering isn't working right.
  - ah, missing `/MoinMoin/parser/text_to_wiki.py` - copied local to server, now good.
- general testing
  - argh, Title search still posts to `moin.fcgi`. Doh, lots of links still do!
    - hmm, now it seems like they don't. Very confused. But it seems ok at the moment...
  - notice that D And D is not getting recognized as a WikiWord. Hmm, probably a cache issue - once I edited the page, it linked properly.
  - ugh, hitting http://www.teamflux.com/wiki doesn't give me the Front Page. I definitely have this set in `wiki.py`; checked with the log, it's being set.
    - doesn't work on the local server either!
- ok, it's clear that the Front Page issue revolves around parsing the Space Name (siteid) vs the Page Name inside `wsgiapp.py`
  - for `/wiki/BillSeitz` get `path=/wiki/BillSeitz, normalized pagename=wiki/BillSeitz, cfg.siteid=wiki, space_prefix=wiki/, net pagename=BillSeitz`
  - for `/wiki/` get `path: /wiki/, normalized pagename: wiki, cfg.siteid: wiki, space_prefix: wiki/, net pagename: wiki`
  - for `/wiki` get nothing (is that an Apache issue?)
- look inside `wikiutil.normalize_pagename()`
  - it builds a list of segments by splitting the path on slash (why not make use of siteid?):
    - for `/wiki/BillSeitz`: `[u'', u'wiki', u'BillSeitz']`
    - for `/wiki/`: `[u'', u'wiki', u'']`
- going to move this logic from wsgiapp into `wikiutil.normalize_pagename()` - yep, that works for `/wiki/BillSeitz` and `/wiki/`
- does not work for `/wiki` - even locally, which means it's not an Apache issue!
  - it's not even getting to `wsgiapp.dispatch()`, which is where we've been working.
  - not getting to `wsgiapp.run()` either
  - (set the log level in `/moin192/wikiserverlogging.conf` - just set it to DEBUG)
  - at DEBUG level I don't get anything logged for `/wiki` except `MoinMoin.web.contexts`
  - ah, see that it's the config mapping in `config.multiconfig` - when I pass `/wiki` it's using `farmconfig.pyc` instead of `wiki.pyc`
  - add another entry to the `wikis` dictionary - now at least it's getting called through more stuff.
  - the problem now is in `wikiutil.normalize_pagename()`, because my `space_prefix` has both slashes.
  - fixed `normalize_pagename()` - now it works!
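  - for the record, a sketch of the prefix-stripping behavior I ended up with (my own reconstruction, not MoinMoin's actual code; the Front Page fallback name is illustrative):

```python
# Sketch: strip the space's prefix (cfg.siteid) off the request path,
# and fall back to the Front Page when only the prefix is left.
def normalize_pagename(path, siteid, front_page=u'FrontPage'):
    name = path.strip(u'/')         # u'/wiki/BillSeitz' -> u'wiki/BillSeitz'
    prefix = siteid + u'/'
    if name.startswith(prefix):
        name = name[len(prefix):]   # -> u'BillSeitz'
    elif name == siteid:            # bare /wiki or /wiki/
        name = u''
    return name or front_page

assert normalize_pagename(u'/wiki/BillSeitz', u'wiki') == u'BillSeitz'
assert normalize_pagename(u'/wiki/', u'wiki') == u'FrontPage'
assert normalize_pagename(u'/wiki', u'wiki') == u'FrontPage'
```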
Jun29-Jul01
- next: copy back to the server, confirm it's working consistently...
  - done!
- also set up Super User and Language Setup in `/diary/` - now I get a good Bill Seitz page and RecentChanges there, though the Front Page is unchanged, and these all require the ugly moin.fcgi URL.
- updated Gloria W.
  - she changed the Apache Redirect Rule to redirect everything to MoinMoin, which is clearly what I don't want. It seemed to work ok (ignoring the need for static home pages) at first, then got weird (e.g. CSS/images don't work). Did I break something, or was browser caching hiding the breakage?
Jul06
- frustrated: take a step back - what are my options?
  - spend some time trying to get Apache working, even if only for myself
    - maybe focus on putting everything at webseitz.fluxent.com - ok, this is a decent plan
  - rebuild from scratch on Django; Blue Host (and Gloria W) are kinda used to that.
  - rebuild from scratch on PikiPiki - have to self-host, or rent dedicated hosting to avoid the Apache mess? (Use something lighter like Robaccia?)
  - get MoinMoin working at Blue Host with something other than Apache.
- play with Apache: start by getting things working at webseitz.fluxent.com without pretty URLs
  - rename `www/.htaccess` - get the static root
  - rename `www/fluxent/webseitz/.htaccess` - get the static root
  - try <http://webseitz.fluxent.com/cgi-bin/moin.fcgi/wiki/BillSeitz> - server error
  - put back the .htaccess that has just the fcgi mapping - still no good.
  - remove the symlink to moin.fcgi, make a copy - still problems
  - after much mucking, realize that server errors call 500.php, so I need a PHP map in .htaccess
  - fix the path to the upgraded Python at the top of moin.fcgi - `#!/home/teamflux/python/2.6.5/bin/python`
  - now get a working page at <http://webseitz.fluxent.com/cgi-bin/moin.fcgi/wiki/BillSeitz>
  - links to CSS etc. in the source look like http://webseitz.fluxent.com/moin_static192/modern/css/common.css - which fail
  - hmm, there's an .htaccess within `/moin_static192/` - edit the file to take the Rewrite Rule-s out. Now I get a pretty page at <http://webseitz.fluxent.com/cgi-bin/moin.fcgi/wiki/BillSeitz> - and diary at <http://webseitz.fluxent.com/cgi-bin/moin.fcgi/diary/BillSeitz>
  - make a `/wiki/`-specific Rewrite Rule - works! Did the same for `/diary/` - works!
  - tweak the rules to handle `/wiki` and `/diary` URLs - note this means I can't have an URL like `/wikipage.html`. But it works (I get the correct Front Page body).
- whoops, found an issue - if I hit http://webseitz.fluxent.com/wiki I get a nice-looking page, but none of the links are within `/wiki/` - I have the same problem on my local server, just didn't catch it....
  - look at `wikiutil.normalize_pagename()` again
    - hitting http://localhost:8080/wiki/ you get lots of cases like `normalize_pagename returning name MyBlogRoll`
    - hitting http://localhost:8080/wiki you get the same
    - so I don't think this is the source of the issue
  - is it `request.href()`?
  - more investigation - are all the links wrong?
    - yes, each is treated as a relative reference, but because it's relative to `/wiki` it doesn't resolve as if relative to `/wiki/`.
  - hmm, maybe I'm taking the wrong approach! Is the better approach to have `/wiki` redirect (302) to `/wiki/`? That seems pretty common, like if you hit http://www.sippey.com/page/2
    - if that makes sense, should it be done in an Apache Rewrite Rule? That seems most sensible to me.
    - on closer look, I think a 301 status is better to return. Either value can be done with Apache: http://en.wikipedia.org/wiki/URL_redirection#Using_.htaccess_for_redirection
  - yep, changed .htaccess to do the 301, and it works great!
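  - I didn't record the exact rule, but it would be something like this (a sketch only; mod_rewrite would work equally well, and the exact pattern depends on which .htaccess it lands in):

```apache
# Sketch (mod_alias): permanently redirect the bare space path to its
# trailing-slash form so relative links resolve under it.
RedirectMatch 301 ^/wiki$ /wiki/
RedirectMatch 301 ^/diary$ /diary/
```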
Jul08
- what next?
- make the diary private
  - right now anyone can edit without logging in
  - I have the Bill Seitz account set as superuser in `diary.py`
  - add to `diary.py`: `acl_rights_before = u"BillSeitz:read,write,admin,delete,revert All:none"`
  - works - you get `You are not allowed to view this page.`
  - weirdly, you can still create an account for yourself; it just doesn't do any good.
    - tried <http://moinmo.in/FeatureRequests/DisableUserCreation#Solution_for_1.9> but couldn't get it to work
    - not going to worry about it for now
- side note - because I won't have Visible Backlinks at first, I'll manually generate a Site Map for Google to crawl
  - http://en.wikipedia.org/wiki/Sitemaps
    - it can be a pure-text list
    - must it be in the same directory (path?) as the pages?
  - http://www.google.com/support/webmasters/bin/topic.py?topic=8476
    - yes, it can be a text file; no mention of a same-directory requirement
  - hmm, can Title Index and variants do the job? Meatball Wiki:AllPages
- finish content conversion - of the public WikiLog
  - current file counts
    - 13,537 scraped files (should I rename some?)
    - 6,346 already converted - only did a sample of the z files
    - 7,099 obvious skips - but that only gets to 13,445
  - drag the skipped files over, run the converter - yes, get to 13,445
  - let's fix the misnamed files first (e.g. Z2008.....)
    - there are just 5 of them
    - not going to worry about the possibility of links to these
    - some have weird dates; do my best to get the correct one
    - convert
  - now 13,444 converted files - that's a 93-file difference
  - next: write a script to make a file listing each of the input/output directories (actually just used `ls` for that), then a script to compare
    - found 99!
    - tested the other direction, found 6 (those were basically capitalization issues; I must have handled those after generating the conversion files before)
  - just dump all converted content and run again
    - now have 13,537 output files - yeah!
- try creating new pages with a script
  - write the 1st version using http://www.voidspace.org.uk/python/articles/urllib2.shtml#data as a sample
  - at `response = urllib2.urlopen(req)` get error `HTTP Error 404: NOT FOUND`
  - manually add a couple of pages while not logged in, to make sure there's no issue there.
  - check with FireFox LiveHttpHeaders - `action=edit&rev=0&ticket=004c367ded.934c0ab8ce9067ba6df9a0c692eb27de31b9523f&button_save=Save+Changes&editor=text&savetext=test%0D%0A&comment=&category=`
  - even try adding that ticket value - still no good.
  - probably need to tie the ticket to the page name. Start to create a page manually, grab the hidden ticket, stick that into the code to auto-add that same pagename. Yes, it works! (Though there's a weird "None" at the bottom - will have to check that more carefully when I do the next page.)
  - so I need a fancier process - which is probably also good for verifying the page doesn't already exist. Actually, we'll just over-write if it already exists - the form is identical. (Sketch below.)
    - ok, that's working
  - figure out that "None" bit at the end of the page.
    - took out the extra post params that had their value set to `None` - that did it.
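  - a sketch of the resulting poster, reconstructed from these notes (the form-field names come from the LiveHttpHeaders capture above; the ticket regex and the `BASE` url are my assumptions):

```python
# Reconstruction/sketch of the page-posting script described above.
# Python 2 / urllib2, matching the era of these notes.
import re
import urllib
import urllib2

BASE = 'http://localhost:8080/wiki/'   # assumption - adjust per space

def post_page(pagename, body):
    page_url = BASE + urllib.quote(pagename)
    # 1. open the edit form so MoinMoin issues a ticket tied to this page
    form = urllib2.urlopen(page_url + '?action=edit').read()
    ticket = re.search(r'name="ticket" value="([^"]+)"', form).group(1)
    # 2. post the save, echoing the ticket back; omit any params whose
    #    value would be None (they rendered as a literal "None" on the page)
    data = urllib.urlencode({
        'action': 'edit',
        'rev': '0',
        'ticket': ticket,
        'button_save': 'Save Changes',
        'editor': 'text',
        'savetext': body,
        'comment': '',
    })
    urllib2.urlopen(page_url, data)

# usage: post_page('z2009-10-30-SomePage', 'page content here\r\n')
```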
- start actually posting
  - argh, got stopped by `Surge protection` - add a line in `wiki.py` to turn off protection temporarily (see the note just below).
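    - I didn't record the line, but MoinMoin documents disabling surge protection by nulling its limits table, so presumably:

```python
# wiki.py - assumption: the documented MoinMoin knob for disabling
# surge protection entirely.
surge_action_limits = None
```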
  - start doing z pages up through z2009-10-30
    - it dies on 2004-03-22-AndreessenAmericanStrenghts - ah, it was an empty file, because a new one had been created at the end that corrected the mis-spelling - removed the bad one from the list (and deleted the empty file), re-ran, and it went through fine.
    - died again on 2009-09-14-HeifermanSocialStimulus - another 0-length page.
    - finished this batch 11:25
  - RecentChanges shows them all! So does Front Page!
  - trying to tweak `WikiLog.py` to limit the page size. Not easy here.
    - maybe empty the log after adding everything? Because this is such a special case?
- so: add the non-z pages - then empty the log, then add the most-recent z pages that were left to the end. Then check Front Page. Then set the ACL.
- all done adding!
- Front Page now loads normally (hmm, no way to get to old stuff; will need an AllPages link somewhere), except the right column is down below instead of on the right.
- add the ACL. Test to confirm it works.
- next: DNS update. Done Sat Jul10'2010 ~08:00
Jul10-13
- argh, note that when I re-generated the converted pages I forgot to re-fix the tags `'<hr><b></b><br>'` - have to figure out how to handle those later. To Do!
- Front Page still looks weird
  - take out any of yesterday's attempts at setting length limits
  - run against the validator http://validator.w3.org - get 102 errors. Try with my local version and get none. So something isn't matching. Suspect it's some header code I'm missing.
  - start with the Bill Seitz page first
    - only warnings from the validator, and the 2 pages look the same
  - now Front Page
    - "Set bookmark" has a line of `tr` that's not within a `table`.
    - find that it's the 3rd case of `Set bookmark` in `WikiLog.py`
    - can't figure out why this line doesn't appear locally, since the code is the same. Should I just comment it all out? Or wrap it in a table?
    - after that I still have the `daybreak` line, just without the bookmark. So make some more tweaks. Now Front Page looks right - right sidebar and all! And the validator accepts it!
- set up the site in http://www.google.com/webmasters/tools
  - verified my ownership (by uploading a static file they generated)
  - wrote a script to generate `sitemap.txt`, uploaded it to the site, and submitted it to Google (sketch below)
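  - what that generator amounts to (my reconstruction; the pages directory and base URL are assumptions, and real MoinMoin page directories use `(xx)`-escaped names that would need unquoting, e.g. via `wikiutil.unquoteWikiname`):

```python
# Sketch of a sitemap.txt generator: Google accepts a plain-text
# sitemap with one URL per line.
import os

BASE = 'http://webseitz.fluxent.com/wiki/'
PAGES_DIR = 'data/pages'   # hypothetical location of the space's pages

with open('sitemap.txt', 'w') as out:
    for name in sorted(os.listdir(PAGES_DIR)):
        out.write(BASE + name + '\n')
```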
- posted an entry saying we're back up. But am I ready to promote it for real?
- yikes, realize I don't have Expanding Wiki Words (or titles)
  - find `Page.split_title()`, change its default argument to `force=1` - that seemed to work for the title tag but nothing else.
  - hmm, that gets called in `Page.link_to()`, but only if no inner text is passed, so I need to find where that gets called.
    - it's in `text_html.pagelink()`
  - make a `wikiutil.expandWikiWord()` that takes the meat from `Page.split_title()`, call it from `Page.link_to()` (sketch below)
  - at this point it's working for everything in the headers and footer, but not the body of the page.
    - for the header/footer bits, you see `link_to()` returning a full tagset together, but in the body bits `link_to()` gets called separately for the open-a-href and the close-a, and not at all for the text label!
    - what calls `link_to()`? `formatter.text_html.pagelink()` - but that also only gets called twice! What calls that?
      - `text_moin_wiki._word_repl()`, which calls `formatter.text_html.text()` for the label text? Nope, that doesn't get called here. (Tried to catch all calls of `text()`, but it only seems to get called for the username.)
      - `text_moin_wiki._link_repl()`? Nope
      - ugh, can't find anything!
    - even turned on the user pref for "put spaces in wiki words", but that didn't seem to change anything either.
    - hmm, could this be a cache issue? Edited/saved the page - now I see some calls to `_word_repl()` and `_link_repl()`!
  - that was it! Added the call in `_word_repl()` - now good!
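  - what an `expandWikiWord()` helper boils down to (my reconstruction of the `split_title()` trick, not the actual MoinMoin code):

```python
# Sketch: expand a CamelCase WikiWord by inserting a space at each
# lower/digit-to-upper boundary.
import re

def expand_wiki_word(name):
    return re.sub(r'([a-z0-9])([A-Z])', r'\1 \2', name)

assert expand_wiki_word(u'FrontPage') == u'Front Page'
assert expand_wiki_word(u'WikiLog') == u'Wiki Log'
```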
have to copy to server now (already removed logging lines and re-ran; also checked that it worked on Front Page, etc.)
-
also note that expanding means I'm not happy with having links that aren't underlined. Need to tweak CSS to use subtle underlining. (And consider that special icon for external links - maybe a smaller icon?) - To Do!
-
-
-
-
just realized I've lost my RSS feed! (There's one linked on RecentChanges, but it doesn't give WikiLog content, just titles and comments.)
-
ah, hadn't copied that code over to server. Now working, though still only linked at RecentChanges.
-
tweaked
/theme/__init__.shouldUseRSS()
and now RSS link tag is in header of Front Page. -
create
macro/Rss Icon.py
and call it from Front Page, so icon/link displays there at bottom of right column.
-
Aug19'2010: create a small TeamWiki space.
- restating the process, from the Ubuntu farm guide:
  - copy `/master/` and `master.py` to appropriate names
  - edit the copy of `master.py` to include the correct directories, Super User, ACL, etc.
  - edit `farmconfig.py` for paths; likewise `.htaccess`
  - go to the Language Setup page and install all the system pages.
Jun05'2011: get around to fixing RSS feed
- somewhere along the way I realized that the problem with the feed is that the links have been missing a `/` between the hostname/port and the path.
- looking inside `rss_rc.py`. Looks like the issue is where `full_url()` calls `contexts.getQualifiedURL()`, which does an `rstrip('/')` - why?
- just stick the slash back in... (no, not bothering to use `url_join()`)
- have a nice slash now, but realize the URL is just using the hostname without the rest of the path (which denotes the WikiSpace): http://webseitz.fluxent.com/z2011-06-04-WilsonNetCulturalRevolution?action=diff
- is `wrappers.host_url()`'s default of calling `get_current_url()` with `host_only=True` the issue? Changing it.
  - ugh, that turns every link in the feed into a long url like http://webseitz.fluxent.com/cgi-bin/moin.fcgi/wiki/RecentChanges?action=rss_rc&unique=1&ddiffs=1/InterWikiMap#20110603125329
  - change that call to `get_current_url()` to pass `root_only=True` - get the same super-long url - should I put something in the config file?
- yes: just added `url_root` in the config file for each space, then used that from `rss_rc.full_url()`, so the hack stays local (sketch below).
- going to leave it this way for now.
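- the shape of that local hack (`url_root` is the config name from the notes; everything else here is my guess at the wiring):

```python
# In each space's config file (wiki.py / diary.py):
url_root = 'http://webseitz.fluxent.com/wiki/'

# ...and in rss_rc.py's full_url(), prefer the configured root over
# getQualifiedURL(), whose rstrip('/') was eating the slash:
def full_url(request, pagename):
    return request.cfg.url_root + pagename
```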
Aug02'2011 - add DisQus Blog Thread to each page - WikiWeb Dialogue
- looks like I can just put the minimal config variable in the JavaScript, and let them use defaults for the dynamic values.
- think I'll put the JavaScript call inside `__init__.pageinfo()`
- won't want this to appear on a variety of pages like Front Page - ah, there's a `shouldShowPageinfo()` function right above it. Add a bit to return False for Front Page (sketch below).
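- a sketch of that guard (my reconstruction; using `cfg.page_front_page` to spot the page is my assumption):

```python
# theme/__init__.py - sketch: suppress the pageinfo area (and with it
# the Disqus embed emitted from pageinfo()) on the Front Page.
def shouldShowPageinfo(self, page):
    if page.page_name == self.request.cfg.page_front_page:
        return False
    return True   # stand-in for the theme's original checks
```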
May'2012: revisiting Mercurial because of Webpy For Simplest Thing
- try the easy way: `hg commit` - `abort: cannot commit over an applied mq patch`
  - that warning appears when you try to commit in your main repository while you have qpush'ed some patches. If you want to commit the state of your patch queue to your patch queue repository, use `hg qcommit`.
- try `hg qcommit` - `nothing changed`
- do `hg status` - get about 20 files each of status `M`, `!`, `?`
- do `hg qrefresh && hg qcommit` - `nothing changed`
- do `hg add` - a bunch of files get added; then `hg qrefresh && hg qcommit` - `nothing changed`. But now `hg status` gives nothing.