Overlap

There is a CG industry overlap with another industry that I would like to expose and foster.

Real-time markerless camera tracking with a common tablet could help children who speak sign language talk to people, by matching their poses with camera tracking against a database of signs.

Speech-to-text and text-to-speech could also be on the same device.

This is really really important for those of us who have children with autism.

Why Blender?

There are good people here who are talented coders and understand all the issues. I code a little Python myself, but not BPY yet, just the BGE…
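The pose-matching idea above could start very small. Here is a minimal sketch in plain Python, assuming a tracker has already reduced each captured frame to a vector of normalized joint angles and matching is done by nearest neighbor with a distance threshold; the sign names, vectors, and threshold are all invented for illustration.

```python
import math

# Hypothetical database mapping a sign name to a pose vector
# (e.g. normalized joint angles from a tracker). All values
# here are made up for illustration.
SIGN_DATABASE = {
    "hello": (0.9, 0.1, 0.4, 0.2),
    "thanks": (0.2, 0.8, 0.5, 0.1),
    "water": (0.5, 0.5, 0.9, 0.7),
}

def match_pose(pose, database=SIGN_DATABASE, threshold=0.5):
    """Return the closest sign, or None if nothing is close enough."""
    best_sign, best_dist = None, float("inf")
    for sign, reference in database.items():
        dist = math.dist(pose, reference)  # Euclidean distance
        if dist < best_dist:
            best_sign, best_dist = sign, dist
    return best_sign if best_dist <= threshold else None

# A captured pose close to the stored "hello" vector should match it.
print(match_pose((0.85, 0.15, 0.45, 0.2)))  # hello
print(match_pose((0.0, 0.0, 0.0, 0.0)))     # None (no sign is close)
```

A real system would need far more than this (temporal sequences of poses, not single frames), but it shows how a pose database plus a distance metric gives you a first recognizer to experiment with.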

You’ve made this proposal before and the answer is still going to be the same.

Not only is Speech to Text and Text to Speech outside of Blender’s scope, it’s also not yet to the point of being foolproof and involves a technology that none of the existing Blender developers have experience coding for.

It’s almost like developing a feature that would allow Blender to process text documents like in the Microsoft Office suite (i.e. nothing that’s strongly related to 3D and visual art creation / game design, even though 3D work might involve the placement of text). There are just cases out there where a task is better suited to another standalone program (I know you really like Blender, but it’s unrealistic for it to become the app to end all apps).

Actually, I was talking about a ‘side project’ for people who may be able to do it.

This is the nexus of open source coding of animation software.

If it’s going to be a standalone app, then I don’t think it’s news that would fit in the Blender & CG Discussions forum.

The Blender Game Engine standalone is GPL and has Blender code in it;

it is an ‘extension’ of Blender, and it would be a really nice thing to do for humanity.

Made with Blender code, by Blender coders.

Also, frankly, it’s the kind of move that could get buckets of money dumped on the BF and Ton.

Doing good, with open source software.

The BGE ties strongly into the scope of Blender as a 3D program, the scope extension you propose has a rather indirect link at best.

By all means, a standalone program made for teaching can directly borrow Blender’s tracking code, so long as the application is GPL and the developer knows how to integrate the algorithms. That also means you get a UI and usability code optimized for that purpose, instead of trying to shoehorn it into a 3D production suite.

Just because it can ‘theoretically’ be done doesn’t mean it should. You would never find someone developing PowerPoint-creation tools in LMMS, for instance (even though you could do whatever you want with the code). I think your idea could indeed be put to good use, but Blender may not be the most appropriate vehicle for it.

I don’t see why you’d use Blender for this; it seems like a solution looking for a problem.

The right way to approach this would be to use some sort of computer vision library, like OpenCV.

Not trying to sound negative or anything - it’s a nice idea - but Blender just isn’t the right tool for this.

This is not a small problem. It is very difficult. And, in fact, some work has already been done on this front… and using Blender. There was even a lightning talk about it at the 2014 Blender Conference (the video should jump to 59:04).

Short answer: it’s possible, but very, very difficult.

This is the question that really needs answering here.

Blender is open-source software, and a very popular one at that, but its focus is on 3D content creation, which is a far cry from communication. Just because software is open-source doesn’t mean that it can (or should) do everything.

Don’t get me wrong here. I am all for helping children (and adults) with various disabilities communicate but I just don’t think that Blender is the right software to do that.

To his credit, BPR asks the right question, but gives the wrong answer. Blender is used in all kinds of interesting fields that it wasn’t (and isn’t) intentionally designed for… especially research. Medicine, astrophysics, education, biological research, computer vision, and more have all benefited from the fact that Blender provides a full-featured, robust foundation for 3D display and editing. This is one of the strengths and powers of being open source and should not be so readily dismissed.

As I mentioned in my last post to this thread, a little bit of work has already been done on the specific kind of work BPR is talking about. However, it’s an obscenely complex problem that crosses multiple disciplines. If BPR wants to make a tool like this, he would need to stop anything else he’s doing on the side and expect to focus at least the next decade[-ish] of his life on developing it.

And this here is where the problem lies. Ignoring the obvious problems with BPR specifically being able to knuckle down on a hard task like this for any reasonable length of time, the issue is that this forum is named for, and aimed at, Blender Artists. The issue he discusses has no real artistic angle or merit and is primarily a task for engineering and language experts.

The talk Fweeb mentioned addressed the easier part of the equation - generation of animations given text & a database of sign-language word/spelling poses/animations. The far harder part (for which there is no artistic element) is the recognition of sign language from markerless camera tracking data. This is the kind of thing that you need tenure to pursue.

If BPR is serious about such a thing being done, I’d actually suggest he try raising it with Microsoft Research. Whilst I know some will have a knee-jerk reaction against approaching Microsoft for anything, they do put a great deal of money into such pie-in-the-sky graphics research (with some interesting publications & tools as a result) and, more importantly, it is clearly something they can milk PR from in regards to the “computing helps children succeed” angle they’re pushing in advertising. Also, it puts their Xbox cameras with depth-mapping front and centre, which, like it or not, are a pretty good tool for extracting body animations without markers.

I seem to remember quite a long time ago there was someone looking for Blender to be adapted for a person with limited motor skills or some such. It might have been at a time the forum was being hacked, because I think that thread got lost when it was up again, and I don’t recall it progressing any further than the initial approach.

Anyhow, some adaptation probably could be made in an accessibility branch, but it requires highly skilled volunteer(s) to do it. Perhaps a university student could pick up the challenge to do some aspect in conjunction with their studies, or it could even attract an annual grant from someone like our old friend the Dutch government?

At the moment Blender’s attention seems to have been commandeered to make feature movies, and the philanthropic aspect has gone out of consciousness. Personally I would like to see Blender be more inclusive of others again, and for sure not forgetting those less fortunate than ourselves. I hope that such a reconsideration of community can be conveyed to Ton in person by someone, as busy as he is with his aspirations. Perhaps Bart of Blender Nation could do that, as he is friendly with Ton and was quite into the charity aspect at one time IIRC :wink: Tech-type utilizers of Blender, who use it because it is a uniquely open platform, would probably appreciate a good word being put in on their behalf at the same time :smiley:

Rich or poor, near or far, able or not, artistic or just ingenious, IMO there is a place for you in the Blender sphere. I would be very disappointed to hear from our chairman that Blender resources have become so focussed on studio activities that it is now exclusive of other apparently unrelated interests and unaccommodating of levels of ability other than expert or elite. Perhaps something for accessibility can be encompassed by the stripped-down version of Blender being mooted for school use… just an idea… of course coloured wireframes should appear everywhere without discrimination or justification :yes:

Who would hold the tablet?

Text to speech is on every phone. Speech to text is on every phone.

A child can carry a phone with the app and still communicate with everyone.

Carrying around a tablet is no solution either. It gets heavy.

I hope you find your solution to this problem.

The idea is that kids who speak sign could set it down on something, sign, and speech comes out. But you’re correct that text-to-speech exists, and almost everyone can read; it would just be nice for her to be able to speak in whatever way is most comfortable to her.

Maybe some sort of Leap Motion rig that you wear on your chest could do it?

Hm, I’ve tried to help people here before who were on a similar track (as I like people who do noble things with computers),
and I think you’re the third now who is thinking about this (counting the people I’ve helped), but so far no one has really managed it.
But if I can help, I’m an open ear to you too.

Can you explain to me what you want? It’s not really clear from the start what you want Blender to do,
and what you are planning to do code-wise.

Is it about creating sign language on video?
Text-to-speech is really simple if you’re on Windows (even possible with VBScript, PowerShell, or C#).
Creating a database of signs (e.g. mocap, or BVH) shouldn’t be that hard, but it would be a lot of work.
There already exist Kinect-camera-to-BVH tools (even freeware)… so that would be filling a database.
But one would also need deaf people willing to manually help you out on the animations.

I hope you’re into Python, as that would be required too.
You need a way of extracting SRT > text at the right moment > look the word up in the database > animate.
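That SRT > lookup > animate pipeline can be sketched in plain Python. This is a hypothetical sketch, assuming the sign database is just a word-to-clip mapping; the clip filenames and the subtitle snippet are invented, and a real tool would trigger Blender animations instead of collecting filenames.

```python
import re

# Hypothetical lookup table from a word to a stored sign animation
# (in practice this would point at mocap/BVH clips); invented names.
SIGN_CLIPS = {"hello": "hello.bvh", "world": "world.bvh"}

# One SRT block: index, start --> end timestamps, then the subtitle text.
SRT_BLOCK = re.compile(
    r"(\d+)\s+(\d\d:\d\d:\d\d,\d\d\d) --> (\d\d:\d\d:\d\d,\d\d\d)\s+(.+?)(?:\n\n|\Z)",
    re.S,
)

def parse_srt(text):
    """Yield (start, end, words) for each subtitle block."""
    for _, start, end, body in SRT_BLOCK.findall(text):
        words = re.findall(r"[a-z']+", body.lower())
        yield start, end, words

def plan_animation(text, clips=SIGN_CLIPS):
    """Map each subtitle to the sign clips found in the database."""
    plan = []
    for start, end, words in parse_srt(text):
        plan.append((start, [clips[w] for w in words if w in clips]))
    return plan

srt = """1
00:00:01,000 --> 00:00:02,500
Hello world!

2
00:00:03,000 --> 00:00:04,000
Unknown words here
"""

print(plan_animation(srt))
# [('00:00:01,000', ['hello.bvh', 'world.bvh']), ('00:00:03,000', [])]
```

Words with no clip in the database fall through silently here; a real tool would need a fallback such as fingerspelling each missing word letter by letter.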

This really can be done, and you’re not the first.
It would be wise to try to find contacts in the hearing-impaired “group” of people for support; maybe find a Python coder there,
or someone willing to do all the mocap work.

And make it clearer what you want to do.
In the end, for things like this, Blender is just Python-controllable.
So divide your big problems into small ones, create a team to solve all those small problems, and you end up with what you want.

No, I mean people who speak sign could gesture, and we could somehow capture the postures/gestures and spit out speech, like a sign language interpreter.

Maybe a Kinect would work better.

My daughter speaks sign language, and sometimes it’s so fast I don’t understand; I only know around 100 words, but she knows more.

It would be neat if she could spit out speech by signing.

Why not 3D print the sign languages?

Check out this article: http://3dprintingindustry.com/2015/11/11/high-school-student-3d-prints-gamified-sign-language-learning-tool/

Reem
MakePrintable

I personally have no problem with someone who is indirectly dealing with autism opening a discussion about an idea that might help many such people. However, it is an extremely difficult problem, of which Blender would only be a small part.

I would pose the idea to entrepreneurial websites like AngelList (http://angel.co). You need a diverse range of software engineers, educators, physicians, and… money. Lots of money. “Angels” have lots of money, though, and a desire to do good with it (and make more money in the process).