Stream segregation by frequency
Miller and Heise (1950) created sequences of two alternating tones of different frequencies, played at five tones per second. When the frequency difference between the tones was small, participants heard a single melody whose tones went up and down in pitch (a trill). When the frequency separation was larger, participants heard two separate melodies, one at each pitch. Large pitch differences caused the tones to segregate into separate streams.
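The stimulus can be sketched in code. This is a minimal synthesis of the alternating-tone sequence; the sample rate, tone duration, and the specific frequencies in the usage lines are my assumptions, not values from the paper.

```python
import math

def tone(freq, dur=0.2, sr=8000):
    """Generate a pure sine tone as a list of float samples."""
    return [math.sin(2 * math.pi * freq * n / sr) for n in range(int(dur * sr))]

def alternating_sequence(f_low, f_high, n_pairs=5, rate=5, sr=8000):
    """Alternate two tones at `rate` tones per second (Miller & Heise style)."""
    dur = 1.0 / rate  # each tone lasts 0.2 s at 5 tones per second
    seq = []
    for _ in range(n_pairs):
        seq += tone(f_low, dur, sr) + tone(f_high, dur, sr)
    return seq

# Small separation tends toward the trill percept; large separation
# tends toward two streams (frequencies here are illustrative).
trill = alternating_sequence(400, 430)
streams = alternating_sequence(400, 1600)
```

Writing either list to a WAV file (e.g. with the standard-library `wave` module) lets you hear the percept flip as the separation grows.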
Bregman and Campbell (1971) had participants listen to sequences composed of 3 high-frequency tones (2500, 2000, and 1600 Hz) and 3 low-frequency tones (550, 430, and 350 Hz) played at 10 tones per second. The sequences perceptually segregated into two groups: one composed of the high-frequency tones and one composed of the low-frequency tones. Participants accurately judged the order of tones within the same stream but not the order of tones across different streams.
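A toy illustration of that result: grouping the interleaved presentation order by frequency proximity recovers the within-stream orders that listeners could report, while the global interleaved order is what they could not. The 1000 Hz split point is my own choice for separating these particular tones.

```python
# Tones from Bregman & Campbell (1971), interleaved at presentation.
presented = [2500, 550, 2000, 430, 1600, 350]  # Hz, 10 tones per second

# Perceptual grouping by frequency proximity: each tone joins the
# nearer cluster (>= 1000 Hz is "high" for this tone set).
high_stream = [f for f in presented if f >= 1000]
low_stream = [f for f in presented if f < 1000]

# Within-stream order is preserved in each list; the interleaved
# global order across streams is what listeners failed to report.
```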
Stream segregation by timbre
Wessel (1979) found that tones with different spectral components formed two streams, while tones with closer spectral distance formed a single stream. However, differences in the onset transients of the sounds did not influence streaming. I’m not sure how pitch was accounted for in this experiment – whether stream segregation occurred when different timbres were played at similar pitches.
Van Noorden (1975) determined that tones with the same pitch but different harmonics segregate. A tone composed of the third, fourth, and fifth harmonics of its fundamental frequency segregates from a tone composed of the eighth, ninth, and tenth harmonics of the same fundamental. I’m not sure whether the fundamental itself was included in the tones. Van Noorden also found that tones with different band-pass filters applied (i.e., different spectra) formed different streams whether or not the notes had the same pitch.
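The two complex tones can be sketched as sums of sine partials. The fundamental frequency (200 Hz), duration, and sample rate here are my assumptions; the source does not give them. Both tones imply the same 200 Hz pitch (by the missing-fundamental effect) yet occupy different spectral regions.

```python
import math

def complex_tone(f0, harmonics, dur=0.2, sr=8000):
    """Sum of sine partials at the given harmonic numbers of f0,
    scaled by the partial count so output stays within [-1, 1]."""
    n = int(dur * sr)
    return [sum(math.sin(2 * math.pi * f0 * h * t / sr) for h in harmonics)
            / len(harmonics) for t in range(n)]

# Same implied 200 Hz pitch, different spectra:
low_partials = complex_tone(200, [3, 4, 5])    # partials at 600, 800, 1000 Hz
high_partials = complex_tone(200, [8, 9, 10])  # partials at 1600, 1800, 2000 Hz
```

Alternating these two tones rapidly is the kind of sequence van Noorden reports as segregating despite the matched pitch.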
Singh (1987) found that differences in spectra caused streaming as strongly as differences in pitch did.
As in Wessel’s study, results from Hartmann and Johnson (1991) suggest that amplitude-envelope differences (onset differences) between tones do not affect streaming. Iverson (1995), however, found different results with respect to the dynamic components of sound (onset and spectral flux): his study shows that these attributes do affect stream segregation.
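The manipulation at issue in these studies is the attack of the amplitude envelope. A minimal sketch of that contrast, with attack times, frequency, and sample rate chosen by me for illustration:

```python
import math

def envelope_tone(freq, attack, dur=0.2, sr=8000):
    """Sine tone with a linear attack ramp lasting `attack` seconds."""
    samples = []
    for t in range(int(dur * sr)):
        amp = min(1.0, (t / sr) / attack) if attack > 0 else 1.0
        samples.append(amp * math.sin(2 * math.pi * freq * t / sr))
    return samples

# Identical spectrum, different onset transients: the attribute
# Hartmann & Johnson found irrelevant and Iverson found relevant.
sharp = envelope_tone(440, 0.005)  # percussive, abrupt onset
soft = envelope_tone(440, 0.100)   # gradual, bowed-string-like onset
```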
Applying this to Music Composition
These findings suggest to me that, with sufficient timbral and/or pitch separation between two musical streams played simultaneously, the harmony and rhythmic order of the tones may matter less, because the two streams will be perceived as distinct from one another. Conversely, by reducing the timbral and/or pitch separation of two simultaneous streams, the perception of a single stream of music increases, and consequently dissonances between the two streams become much more apparent.
If I wanted to combine two songs, or separate one song into two, I could exploit stream segregation by frequency by transposing part of the music to a higher pitch. Separating a song into two new songs involves finding a song in which some of the notes form one plausible song while the remaining notes form another. Combining two songs into one involves something similar, but it is more difficult because the two songs aren’t already organized in time, so their notes may overlap in strange ways. Although with a little work, that overlap might make a better remix than simply splitting a single song into two new ones. What considerations are there in combining two songs?
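The transposition idea can be sketched with a hypothetical note representation. The `(onset_beat, midi_pitch)` tuples and the two short melodies below are made up for illustration; the point is only that pushing one line up an octave moves it into a separate frequency region, encouraging it to segregate from the other.

```python
# Hypothetical note representation: (onset_beat, midi_pitch).
song_a = [(0, 60), (1, 62), (2, 64), (3, 65)]  # C D E F
song_b = [(0, 57), (1, 55), (2, 53), (3, 52)]  # A G F E

def transpose(notes, semitones):
    """Shift every pitch by a fixed number of semitones."""
    return [(t, p + semitones) for t, p in notes]

# Transpose song_a up an octave (12 semitones) before merging, so the
# two lines occupy separate pitch regions and tend to stream apart.
combined = sorted(transpose(song_a, 12) + song_b)
```

The same tool works in reverse for splitting: assign each note of an existing song to one of two candidate streams, then widen their pitch separation until they segregate.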