Non-Textual Media

Brion's initial notes
interesting bits during intros
 * (brion) on-wiki editing for visual types, interactive media too?
 * (andrew lih) - how do we break video into wiki-level objects, what's the structure? what's recent changes for video look like?
 * lots of interest in how to think about video as well as the tech side
 * (erik) -- updates on mdale's work -- it's getting close!

Other ideas...
 * wiki's external editing interface hasn't really taken off; can we do more of that with better integration like a firefox plugin?
 * inkscape, autocad
 * [mdale] I don't think external interface integrations are a high priority. There are all sorts of web-based applications emerging: svg-edit, 3DTin, Pixastic, Universal Subtitles, etc. We should foster relationships with web-based editor applications and encourage community around these tools: give web editing apps high visibility and give feedback to their developers for our collaborative-content use case.
 * 'but how do you diff?'
 * [jason cook] even with huge video things, we can do diffs and stuff from the structure data
 * [erik] mdale's video editor has SMIL & SMIL-like edit decision lists, etc; is diffable. one example view is to just run both versions side-by-side! (ui still needs to be prototyped but it's a known problem)
 * [mdale] yes, I have a few ideas about how diffs could look, i.e. analyzing the XML and seeing what has changed temporally. In the simple case, adding or removing time from a clip, or adding a whole clip, could be represented as the pieces that "changed"; it's a relatively straightforward XML transformation to generate a valid "diff" SMIL XML that represents those changes.
 * related issues: changing codecs, funky standards. still a bit wonky; will need to migrate theora->webm etc. some browsers still h.264-only
 * google pushing webm should at least help move to a 'one or the other' standards world, which is better than where we were with only niche supporters on theora
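The temporal diff mdale describes above can be sketched in a few lines. This is a minimal illustration, not the sequencer's actual code: it assumes a simplified SMIL-like edit decision list where each clip is a `<video>` element with `src`, `clipBegin`, and `clipEnd` attributes (element and attribute names follow SMIL conventions, but the structure here is illustrative).

```python
# Sketch: temporal diff over simplified SMIL-like edit decision lists.
import xml.etree.ElementTree as ET

def clips(smil_xml):
    """Extract (src, clipBegin, clipEnd) tuples in timeline order."""
    root = ET.fromstring(smil_xml)
    return [(v.get("src"), v.get("clipBegin"), v.get("clipEnd"))
            for v in root.iter("video")]

def edl_diff(old_xml, new_xml):
    """Report which clips were added, removed, or re-trimmed."""
    old_by_src = {src: (b, e) for src, b, e in clips(old_xml)}
    new_by_src = {src: (b, e) for src, b, e in clips(new_xml)}
    return {
        "added":     [s for s in new_by_src if s not in old_by_src],
        "removed":   [s for s in old_by_src if s not in new_by_src],
        "retrimmed": [s for s in old_by_src
                      if s in new_by_src and old_by_src[s] != new_by_src[s]],
    }

old = '<smil><body><video src="a.ogv" clipBegin="0s" clipEnd="10s"/></body></smil>'
new = ('<smil><body><video src="a.ogv" clipBegin="2s" clipEnd="10s"/>'
       '<video src="b.ogv" clipBegin="0s" clipEnd="5s"/></body></smil>')
print(edl_diff(old, new))
# -> {'added': ['b.ogv'], 'removed': [], 'retrimmed': ['a.ogv']}
```

A UI could render the "retrimmed" and "added" buckets as a per-clip change list, alongside the side-by-side playback view erik mentions.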

Authoring limits, quality, net


 * [andrew lih] upload bandwidth -- esp in places like US where broadband actually sucks -- is not as good as it needs to be when working with hd video
 * both tech quality and aesthetics on video are much *harder* than photos. [brion] can we make good use of lower-"quality" material on other projects, in areas other than wikipedia proper?
 * We need transcoding support on the server; we got started on that a while back, but have been waiting on the new resource loader to dive into an updated TimedMediaHandler release.
 * -> [jrbl] bootstrapping the video creation community. start small, make bigger. example: scratch (smalltalk thingy), create spiffy little interactive thingies and share them
 * http://scratch.mit.edu/ -> teaching video/media literacy to kids. teach better practices, how to take criticism and improve something
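The server-side transcoding mentioned above amounts to producing a set of derivative files per upload. A minimal sketch of the job construction, assuming ffmpeg as the encoder (the derivative names, sizes, and codec choices here are assumptions for illustration, not TimedMediaHandler's actual configuration):

```python
# Sketch: build ffmpeg argv lines for per-upload transcode derivatives.
# Assumed tooling: ffmpeg with libvpx/libvorbis (WebM) and
# libtheora/libvorbis (Ogg). Derivative keys are hypothetical.
import shlex

DERIVATIVES = {
    "webm_480p": ["-c:v", "libvpx", "-c:a", "libvorbis", "-vf", "scale=-1:480"],
    "ogv_360p":  ["-c:v", "libtheora", "-c:a", "libvorbis", "-vf", "scale=-1:360"],
}

def transcode_command(src, key):
    """Build the ffmpeg argv for one derivative of an uploaded file."""
    ext = "webm" if key.startswith("webm") else "ogv"
    out = f"{src.rsplit('.', 1)[0]}.{key}.{ext}"
    return ["ffmpeg", "-i", src] + DERIVATIVES[key] + [out]

cmd = transcode_command("Upload.ogv", "webm_480p")
print(shlex.join(cmd))
```

In practice these commands would be queued as background jobs rather than run at upload time, since HD transcodes can take longer than the request.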


 * [kimo] example - nytimes presentation thingy w/ obama's texas speech: timeline, text, etc

Bringing the pieces together


 * commons as a media framework?
 * so far it's a *repository*... we need the tech to put the pieces *together*!


 * [andrew lih] It seems like there's nothing more powerful than 1987 Hypercard!
 * [brion] That's exactly what I want to do :D


 * [jason cook] we do already have some wiki object structure at the page level, can we use those same relations for media pieces?
 * [jrbl] getting the data actually accessible for big searches is still kinda new
 * [jc] -> semantic mediawiki kinda goes the other way, putting pieces in the other direction :D

Infrastructure


 * Stability and performance... can we survive a mega-hit?
 * [ryan] slashdot no problem ;)
 * (forget slashdotting -- think about the popedotting & michael jackson. we've seen bad bottlenecks and we know there'll be more)
 * [jc] photos, video, etc also need more server work when edits happen
 * [jc, brion] smil pushing rendering to client + really good CDN distribution can limit the bottlenecking of video changes
 * [jc] bandwidth gets cheaper as time & usage goes on! [yay]
 * [ryan] the actual large video files are still not ideal on present storage architecture -- but improvements are on the way
 * [jc] squid, varnish, lighty, etc. are great for small files but poor for huge files. Akamai etc. know this, but they don't share the tech. but... more work coming
 * [mdale] Exploring new p2p distribution mechanisms: both the torrent-based p2p distribution mentioned in September, and the latest work on a trackerless, cloud-based p2p media hash delivery network.

For the users...
 * [erik] templates work in mdale's sequencer: can help to share particular types of data
 * [erik] timed text support also in the sequencer: editable & templatable subtitles. citations and images work :D
 * ^ this is the start of mixed media -- awesomeeeee
 * [mdale] On my todo list is timed text transclusion for sequence publishing. I.e. for every segment of a clip you have in the sequencer, grab the segmented timed text available and composite it into the sequence's timed text. Would be nice to do this with templates, maybe some magic to add to the TimedMediaHandler extension, but for now we can just copy and duplicate the text segments.
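The compositing step mdale describes can be sketched as pure timestamp arithmetic: clip each source cue to the segment of the clip actually used, then shift it to the clip's position on the sequence timeline. The cue representation below (`(start_sec, end_sec, text)` tuples) is illustrative, not a real TimedMediaHandler structure.

```python
# Sketch: composite per-clip timed text into sequence timed text.
# clips: list of (cues, clip_begin, clip_end, seq_offset), where cues
# are (start_sec, end_sec, text) tuples in clip-local time.

def composite_cues(clips):
    """Clip each cue to the used segment, shift to sequence time."""
    out = []
    for cues, begin, end, offset in clips:
        for start, stop, text in cues:
            if stop <= begin or start >= end:
                continue  # cue falls entirely outside the segment used
            s = max(start, begin) - begin + offset
            e = min(stop, end) - begin + offset
            out.append((s, e, text))
    return sorted(out)

cues_a = [(0, 4, "hello"), (5, 9, "world")]
cues_b = [(0, 3, "again")]
# Clip A used from 2s-8s, placed at 0s; clip B used whole, placed at 6s.
seq = composite_cues([(cues_a, 2, 8, 0), (cues_b, 0, 3, 6)])
print(seq)
# -> [(0, 2, 'hello'), (3, 6, 'world'), (6, 9, 'again')]
```

Doing this at publish time (rather than playback time) matches the "copy and duplicate the text segments" fallback above, while leaving room for a template-driven version later.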

Objectivity issues?
 * [emmanuel] video tends to be more emotionally affective than text; is this something we need to explicitly watch out for in the wikimedia situation where we have neutrality concerns?
 * [erik] note: it's not just the explicit bias, but also implicit biases by the author in every little decision. in a video situation, more of these tiny decisions accumulate faster than they do when you're writing text or making a photo. [high bandwidth data -> more chances for issues]. keep the element of collaboration in there very early on to help combat accidental bias drift
 * [steven sarr] we've built up conventions over the years for how we work with text; those will need to be built up for new media too. community will develop the skills!
 * [kimo] the popularity of reality tv -- which tends to also come with a lot of behind-the-scenes parts -- may actually help to train people to be MORE media-literate and bias-aware. This is good!
 * keeping edit history helps to reveal more of these things
 * [ward] more things will be filmed with multiple cameras -- filming the filmers as well. multi-perspective video is going to make more of those bits visible on the inside.
 * [kimo] cf how magicians make their tv shows believable: show an audience in the video so you have more 'witnesses' to build trust that the video wasn't mis-edited
 * [erik] example: metavid -- pulls HUGE library of video data from us government legislature floors, making it searchable so people can go through and see what was really said by whom when. makes the details of the past accessible
 * [ryan] -> can be hooked into all kinds of semantic data too; really powerful concept!
 * [dvdptrs] a lot of times photos on commons lack that context data, and interpretations can be vague at times
 * rich data can be encouraged with upload template data: we can at least capture a lot of information in language. will be VERY important for video!

Mike's initial notes

 * Discussion of codecs, WebM wins.
 * Media literacy is going up; lack of skills will be a non-issue going forward
 * Wikipedia is already so popular that scaling for e.g. a video going viral isn't a big deal.
 * SMIL is interesting because it moves rendering of the pieces to the end user
 * [mdale] Note that in practice we presently flatten the video files (on end users' computers, in Firefox) because right now it's hard to render SMIL in real time on the present HTML5 video platforms.
 * wiki markup, including templates used in translations etc., is super powerful
 * addressing POV requires community; cameras everywhere make multi-perspective video feasible
 * Metavid was cool, lots more like it coming

Attendance

 * Brion
 * Mike Linksvayer
 * Fuzheado
 * (add yourself!)
 * Mdale (added notes but did not attend the meeting)