Blender Verse - Teaching edition

I know it’s a crazy idea, but I would like to see a version of Blender Verse that was like a NASA control room,

students at secondary consoles, and a “host teacher” who can bring up stuff on a big screen, circle it, type next to it, etc.

What is on the sub-consoles? A screencast of their Git or SVN, or something else?

Basically, a place to hang out where people learn to code together, independently and with the group at the same time,

like Second Life for serious coders.

This indeed has some interesting possibilities. A synchronised environment where the class is all hooked into the one Blender environment, perhaps each student & the teacher having their own “scene”. Of course, a simple “verse” style environment won’t help much because it simply sends the changes to the mesh as opposed to HOW those changes were made (e.g. press SHIFT+A, then ‘Select Armature’, etc.).

On the other hand, I can see this tying into the proposed (though I’m not sure if it’s being worked on) idea of zBrush-style recording and playback of actions in Blender. Provided it was coded with this in mind, there is the potential not only for playing back actions from a recorded file but also for playing back actions streamed over the network. Provided one could “lock” the interaction on one side or the other, one could even have that playback be bidirectional. Combined with information on what keys were pressed, mouse movements, etc., this could make for a tool that doesn’t even need to be in a physical classroom.
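To make that distinction concrete, here is a rough Python sketch (all field names are hypothetical, not any real Verse or Blender format) of the difference between the mesh delta a Verse-style sync would send and the kind of action record a streamed, teachable session would need, plus a trivial “lock” flag for deciding which side currently drives the playback:

```python
import json
import time

# What a Verse-style sync effectively sends: the *result* of an edit.
mesh_delta = {
    "kind": "mesh_delta",
    "object": "Cube",
    "moved_verts": {12: (0.0, 1.0, 0.0)},   # vertex index -> new location
}

# What a teaching stream would need: *how* the edit was made.
action_record = {
    "kind": "action",
    "time": time.time(),                     # timestamp for ordered playback
    "input": ["SHIFT+A"],                    # keys/buttons the user pressed
    "operator": "object.armature_add",       # the operator that actually ran
    "context": {"mode": "OBJECT"},           # enough state to replay it
}

# A tiny session header deciding who drives playback at the moment.
session_state = {"driver": "teacher", "locked": True}

# Records would be serialised and streamed, e.g. one JSON object per line.
print(json.dumps(action_record))
```

The second record is the interesting one: it carries the keys pressed, the operator that ran, and enough context to replay it, which is what a student watching the stream would actually need to learn from.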

All that said, it would be a huge task, which is probably why we won’t see it until it becomes a focus of Ton or an educational institution with quite a bit of spare money/development effort to dedicate to the task.

I wonder if you could write the room to be HTML5,
and the nodes to display content from a screencasting server,
and each client then sends input data to Verse?
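Purely as a sketch of the “sub-console feeds into the big screen” plumbing (not any real Verse or Blender API, and an HTML5 client would still need a WebSocket layer on top of this), a minimal fan-out relay using Python’s standard asyncio might look like the following: every connected console sends newline-delimited input events, and the relay echoes each event to every other console, such as the big screen.

```python
import asyncio

clients = set()  # stream writers for every connected console

async def handle_console(reader, writer):
    """One connection per student/teacher console.
    Hypothetical protocol: newline-delimited JSON input events."""
    clients.add(writer)
    try:
        while True:
            line = await reader.readline()
            if not line:
                break  # console disconnected
            # Fan the event out to every other console.
            for other in list(clients):
                if other is not writer:
                    other.write(line)
                    await other.drain()
    finally:
        clients.discard(writer)
        writer.close()

async def main():
    server = await asyncio.start_server(handle_console, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```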

This seems like a project anyone with enough time/skills could undertake, without any need to be Blender-centric.

Also, I’m not sure what the NASA-control-room model has over regular online video conferencing/screen-sharing.

If you’re interested in dev collaboration, check http://www.pairprogramwith.me
(podcast interview http://thechangelog.com/pair-with-me/)

Some people record their sessions and publish them afterwards - if it’s useful content.

Just that the teacher has access to everyone’s screen and can bring it up on the larger screen to teach with.

So they could use a student’s problem as an example to teach the rest of the class.

Just having the environment is not enough; you would also need classes.

7 pm EST - Coding with Doug

9 pm EST - UV texture painting with Jim

12 pm EST - Compiling Blender with Ben

etc

I suppose the scheduling and infrastructure are just as important as the environment.

Screen-sharing works just fine for such things provided you have the bandwidth. In a physical classroom environment, this is trivial. Distributed classrooms make it a little less so (at least for the teacher being able to see the screens of all the students).

On the other hand, my idea relies on there being a fully implemented zBrush-style recording/playback interface, and that would require being Blender-specific. Of course, my understanding is that whilst we’re closer to being able to implement such a record/playback feature, there are still quite a few areas that are pure native code from interface-to-result which could/would cause problems (i.e. no “hook” from which to record the user changes).

Could you not make the undo/redo system just be infinite, back to the beginning of the file, and somehow screencast what happens before it does it?

Somehow save the undo/redo steps as a file?

That is possible, but it would not be terribly informative as to HOW the user did it. What would happen is a mesh would show up and just start changing. You can’t tell whether they were using the loop cut tool, knife tool, etc. You wouldn’t be able to see what keys to press to make that happen. And so on.

The cool thing about the zBrush demo mode is that it records EVERYTHING. Not just that the mesh got more vertices placed just so, but HOW that happened. If the user clicked on a menu, that’s recorded and shown. If the user used a keymap instead, recorded and shown. If the user sculpted with this brush, undid that change, sculpted again, undid it, then sculpted it another way - all recorded and shown. It is impressive and VERY useful to play back slowly.

Yeah, what about the UI Python keeping a log,

so each keypress sends a message to a recorder?

also UI clicks send a different message?

like the BGE object.sendMessage

this + timestamps = what you would want, I think

resizing using the mouse etc. would need its own system to generate offsets over time (tricky)

edit: I guess you could use the change in scale, etc., and then apply it over the distance of the timestamps? Distance/time?
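A minimal sketch of that idea, assuming a Blender Python modal operator is acceptable as the “recorder”: it appends timestamped raw events (key presses, clicks, mouse position) to a list, and a small helper shows the “change over the distance of the timestamps” interpolation for continuous edits like resizing. Note that raw events alone still don’t tell you which tool or operator they ended up triggering, which is the gap pointed out above.

```python
import time
import bpy

event_log = []  # (timestamp, event type, value, mouse x, mouse y) records

class WM_OT_record_input(bpy.types.Operator):
    """Append timestamped raw input events to event_log (ESC stops recording)."""
    bl_idname = "wm.record_input"
    bl_label = "Record Input Events"

    def modal(self, context, event):
        if event.type == 'ESC':
            return {'FINISHED'}
        # event.type is e.g. 'A', 'LEFTMOUSE', 'MOUSEMOVE';
        # event.value is 'PRESS', 'RELEASE', etc.
        event_log.append((time.time(), event.type, event.value,
                          event.mouse_x, event.mouse_y))
        return {'PASS_THROUGH'}  # let Blender handle the event as normal

    def invoke(self, context, event):
        context.window_manager.modal_handler_add(self)
        return {'RUNNING_MODAL'}

def interpolate(v0, v1, t0, t1, t):
    """The 'change over the distance of the timestamps' idea: linearly
    interpolate a continuous value (scale, size, etc.) between two samples."""
    if t1 == t0:
        return v1
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

bpy.utils.register_class(WM_OT_record_input)
# Start recording with: bpy.ops.wm.record_input('INVOKE_DEFAULT')
```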

It, of course, can be done. zBrush proves that. The issue is whether it is or isn’t important enough to the Blender Foundation to allow the kinds of invasive changes that would be required. This isn’t like an add-on or small UI patch; having Blender capable of full session record & playback will require large changes touching many areas of the UI.

Even without my feelings about the issue of changes & BFI vetoes, I don’t think anyone can dispute that a change that widespread, complex, and pervasive would have to be done by the BFI developers. There isn’t a snowflake’s chance in hell anyone is going to put in the effort required given even a 10% chance it’ll be rejected. I’m not even going to argue that the BFI veto shouldn’t apply to such a large change, which should indicate how big/invasive a task I think it to be.

Actually, I think it would just be adding one line to each UI element?

like addlog(ui element name)?

and a separate system leveraging the shortcut code?

another one for mouse/UI interactions?
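As a sketch of what that “one line per UI element” might look like from the Python side (add_log and logged are made-up names, and this would only reach Python-defined operators; Blender’s built-in buttons and shortcut handling live in C, which is a large part of why it isn’t that simple):

```python
import functools
import time

session_log = []  # shared log of (timestamp, label) entries

def add_log(label):
    """The hypothetical one-liner each UI element would call."""
    session_log.append((time.time(), label))

def logged(label):
    """Decorator sketch: wrap a Python operator's execute() so each use is logged."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(self, context):
            add_log(label)
            return fn(self, context)
        return inner
    return wrap

# Usage (on a hypothetical Python-defined operator):
# class MESH_OT_my_tool(bpy.types.Operator):
#     ...
#     @logged("My Tool button")
#     def execute(self, context):
#         ...
```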

I very much doubt it is that simple (otherwise it would have been done already). However, if you want, ask Campbell how much work he thinks would be required to implement zBrush level record & playback. He’s far more knowledgeable about the code base than I am. I suspect he’ll tell you it is a much larger task too, but I’d be happy to be wrong.

The devil’s in the details; any single feature may be simple, but to create an entire system which does something useful with event logging (so you are generating a stream of data someone else can make some sense of) is more work.

@BluePrintRandom, what you propose sounds like a research project which may be interesting, but it is probably a time-sink and wouldn’t necessarily give usable outcomes (most new/interesting software worth writing has risks of failing - it’s fine).

Just that there are lots of interesting research/experimental projects for devs to work on.

For what it’s worth, there’s been some research (and preliminary code) by Jon Denning… at least as it pertains to modeling. Look up his MeshFlow presentation at last year’s Blender Conference (slides and video links are at the top of this page of his website). Granted, it’s still preliminary work and there’d be a lot more necessary to make it fully featured and useful in a general case… but it’s still worth a mention.