
micdotcom:

America’s best states for raising black children have virtually no black children

In April, the Annie E. Casey Foundation published a study measuring the best and worst U.S. states for raising black kids. The researchers took 12 statistical metrics, from “babies born at normal birth weight” to “young adults ages 19 to 26 who are in school or working,” and made an index showing the educational, financial and career prospects for the typical black child in that state.
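
The article doesn’t spell out how the 12 indicators are combined into the index, so the sketch below is only a rough illustration of how a composite index like this is commonly built: each indicator is rescaled to a common 0-100 range and the rescaled values are averaged with equal weights. The normalization, the weights, and the numbers here are assumptions for illustration, not the study’s published method or figures.

```python
# Illustrative only: the study's actual normalization and weighting are not
# given in the excerpt above. This sketch assumes min-max rescaling of each
# indicator to 0-100 across states and an equal-weighted average.

def rescale(values):
    """Min-max rescale {state: raw value} to a common 0-100 scale."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0
    return {state: 100.0 * (v - lo) / span for state, v in values.items()}

def composite_index(indicators):
    """indicators: list of {state: raw value} dicts, one per metric.
    Returns {state: equal-weighted average of the rescaled metrics}."""
    rescaled = [rescale(ind) for ind in indicators]
    return {state: sum(r[state] for r in rescaled) / len(rescaled)
            for state in rescaled[0]}

# Made-up numbers for two of the 12 indicators (not the study's figures):
normal_birth_weight = {"State A": 92.0, "State B": 86.5, "State C": 83.1}
school_or_working_19_26 = {"State A": 88.0, "State B": 74.0, "State C": 71.0}
print(composite_index([normal_birth_weight, school_or_working_19_26]))
```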

The results aren’t pretty

And they’re right… The results aren’t pretty

We have got to correct this.

Reblogged from Black Scientists and Inventors
baestheticsss:

thoughtsofablackgirl:

These handsome guys are from Meharry Medical College.
Here Are Some Facts About The School
Top-ten producer of African-American Ph.D.s in Biomedical Sciences
Leading producer of African-American dentists in U.S.
Meharry is the second largest educator of African-American medical doctors and dentists in the United States.
Meharry was the first medical school in the South for African Americans. 
Meharry is currently the largest private historically black institution (HBCU) in the United States dedicated to educating healthcare professionals and scientists.

This makes me so happy

Reblogged from We Help You Draw
littlelimpstiff14u2:

D-P  Photography

[ … nordlys ] (Norwegian for “northern lights”)

Spectacular reflections of the Aurora Borealis over the frozen waters of Flakstadøya island in the Lofoten archipelago

"We need to talk…"

The Dancing Traffic Light


Claude Monet » Water Lilies 

I saw “Water Lilies” as a child and it changed everything I felt about art and creativity.

Take your kids to the museum.

(Source: detailsdetales)

Reblogged from Neuroscience
neurosciencestuff:

Researchers make new discovery about brain’s 3-D shape processing

While previous studies of the brain suggest that processing of objects and places occurs in very different locations, a Johns Hopkins University research team has found that they are closely related.

In research funded by the National Institutes of Health and published today in the journal Neuron, a team led by Johns Hopkins researcher Charles E. Connor reports that a major pathway long associated with object shape also carries information about landscapes and other environments.

Siavash Vaziri, then a biomedical engineering graduate student and now a post-doctoral fellow in the Connor lab, studied how neurons in the ventral visual pathway of the monkey brain respond to 3-D images. In one channel of the ventral pathway, neurons responded to small, discrete objects as expected. But in a neighboring, parallel channel, the researchers were surprised by the overwhelming responsiveness of many neurons to large-scale environments that surround the viewer, extending beyond the field of view.

"We were entirely surprised ourselves," said Connor, senior author of the paper. "Based on decades of research, we expected that all neurons in the ventral pathway would be primarily concerned with objects."

The ventral pathway is one of the two major branches of high-level visual processing in humans and other primates. It is sometimes called the “what” pathway, based on its role in identifying objects based on their shapes and colors.

"Dr. Vaziri’s finding is exciting because it puts environmental shape information together with object shape information in two densely connected neighboring channels. This could be a site for integrating object information into environmental contexts in order to understand scenes," Connor said.

Vaziri used microelectrodes to study how individual neurons responded to a large variety of 3-D shapes projected onto a large screen. Depth structure was conveyed by shading, texture gradients, and stereopsis, the effect used in 3-D movies. The shape stimuli evolved during the experiment based on the neuron’s responses, sometimes in the direction of small objects near the viewer, sometimes in the direction of environments filling the screen and surrounding the viewer.
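
The release says the stimuli “evolved during the experiment based on the neuron’s responses” but does not describe the algorithm, so the sketch below is only a generic response-guided search of that general kind: stimuli that drive a neuron most strongly are kept and mutated to form the next generation. The stimulus parameters, the population settings, and the stand-in “neuron” are all hypothetical, not the study’s.

```python
# Illustrative only: a generic response-guided ("evolutionary") stimulus search.
# The 3-D shape parameters and the fake response function are invented here.
import random

def make_stimulus():
    # A stimulus summarized by a few hypothetical 3-D shape parameters.
    return {"size_deg": random.uniform(1, 60),     # visual angle covered
            "depth_m": random.uniform(0.3, 10.0),  # simulated viewing distance
            "curvature": random.uniform(-1, 1)}

def mutate(stim):
    # Small random perturbations of each parameter, kept within bounds.
    child = dict(stim)
    child["size_deg"] = max(1, min(60, child["size_deg"] * random.uniform(0.8, 1.25)))
    child["depth_m"] = max(0.3, min(10.0, child["depth_m"] * random.uniform(0.8, 1.25)))
    child["curvature"] = max(-1, min(1, child["curvature"] + random.uniform(-0.2, 0.2)))
    return child

def adaptive_search(record_response, generations=10, pop_size=20, n_keep=5):
    """record_response(stim) -> firing rate. In a real experiment this would be
    the neuron's measured response; here the caller supplies it."""
    population = [make_stimulus() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=record_response, reverse=True)
        parents = ranked[:n_keep]                     # best-driving stimuli survive
        children = [mutate(random.choice(parents)) for _ in range(pop_size - n_keep)]
        population = parents + children               # next generation of stimuli
    return max(population, key=record_response)

# Stand-in "neuron" that happens to prefer large, surrounding stimuli:
fake_neuron = lambda s: s["size_deg"] - 5 * abs(s["curvature"])
print(adaptive_search(fake_neuron))
```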

Connor, a professor of neuroscience and the director of the Zanvyl Krieger Mind/Brain Institute at Johns Hopkins, is a noted expert on the neural mechanisms of object vision. His research focuses on deciphering the algorithms that make object vision possible and explain the nature of visual experience.

"Many people would say that vision is our richest and most vivid experience," said Connor. "We want to understand the brain events that create that experience."

Connor said that the next step will be to understand how object and environment information are integrated between the two channels.

"We don’t typically experience objects in isolation," Connor said. "We experience scenes, that is, environments containing multiple objects. We now think that the ventral pathway may be where all that information gets put together to create scene understanding."


divalocity:

Back Stage Beauty: Nyamuoch Girwath for Rosie Assoulin SS 2015 RTW

Photo Credit: Nicole Cohen and Courtney Velasco

(Source: sketch42blog.com)

Reblogged from Neuroscience
neurosciencestuff:

Neurons express ‘gloss’ using three perceptual parameters

Japanese researchers showed monkeys a set of images representing various glosses and measured the responses of 39 neurons using microelectrodes. They found that a specific population of neurons changed the intensity of its responses linearly according to either the contrast-of-highlight, the sharpness-of-highlight, or the brightness of the object. This shows that the brain uses these three perceptual parameters when it recognizes a variety of glosses. They also found that different parameters are represented by different populations of neurons. The work was published in the Journal of Neuroscience.

The gloss of an object’s surface provides information about the condition of that object: for instance, whether it is wet or dry, or whether food is fresh or old. Several gloss-related physical parameters, such as specular reflectance and diffuse reflectance, have been described and used in computer graphics. However, the parameters that neurons use when they respond to gloss had not yet been identified.

A Japanese research group led by Hidehiko Komatsu, a professor at the National Institute for Physiological Sciences (NIPS) of the National Institutes of Natural Sciences (NINS), in collaboration with the Advanced Telecommunications Research Institute International (ATR), prepared 16 images representing various glosses and showed them to monkeys. In a circumscribed area of the inferior temporal cortex, neurons strengthened their responses proportionately as the contrast-of-highlight and/or sharpness-of-highlight increased. Neural responses also varied greatly depending on brightness, for instance whether the object was black, gray, or white. Furthermore, the perceptual gloss parameters of the presented image could be predicted fairly precisely from the strengths of the population’s neural responses.
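
The statement that gloss parameters could be predicted from the strengths of the population responses describes a population decoder. Below is a minimal sketch of one common approach, a least-squares linear map from firing rates to the three perceptual parameters; the firing rates and parameter values are randomly generated stand-ins, not data from the study.

```python
# Illustrative only: linear population decoding of gloss parameters.
# All numbers below are randomly generated stand-ins, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 39            # 39 neurons, as reported above

# Hypothetical gloss parameters per trial: highlight contrast, highlight
# sharpness, and object brightness (stand-in values on a 0-1 scale).
gloss_params = rng.uniform(0.0, 1.0, size=(n_trials, 3))

# Hypothetical linear encoding: each neuron's rate is a weighted sum of the
# three parameters plus noise.
encoding = rng.normal(size=(3, n_neurons))
rates = gloss_params @ encoding + rng.normal(scale=0.1, size=(n_trials, n_neurons))

# Population decoder: least-squares linear map from firing rates back to the
# three perceptual parameters.
W, *_ = np.linalg.lstsq(rates, gloss_params, rcond=None)

# Predict the gloss parameters of the last trial from its population response.
print(np.round(rates[-1] @ W, 3), np.round(gloss_params[-1], 3))
```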

By applying these findings to artificial image recognition systems, the researchers expect that it will become possible to develop robots that recognize gloss the way humans do.

How the brain processes glossy or shiny objects…