Cosmic integration edits to compute WDWD merger rates #1368
base: dev
Conversation
Merging fast cosmic integration scripts
|
Hi Melanie, thanks for the contribution! Do you have a dataset (or a command that produces a dataset) that works to test this change? I did a small population run with and then ran cosmic integration with but got which I guess means there are no merging WDWD binaries in my sample. Perhaps we could add a check that

What do you assume about Pdet for WDWD binaries? Presumably this should be 0, since WDWD binaries are not sources for ground-based GW detectors like LIGO. It wasn't clear to me whether the current code would do this. Is the science case here things like comparing to the SN Ia rate as a function of redshift? (like Figure 3/4 in Eldridge et al. 2018)

We shouldn't comment out lines we are no longer using; those should be deleted. The previous behaviour is saved in the git history, so we can always go back to see what we were doing before.

Should we consider enabling rate calculations for other DCO binaries (BH-WD, NS-WD, etc.) as well? These may be useful to someone. This could also be left to a future update. |
|
@avivajpeyi fyi |
|
Hello Simon! Thank you for reviewing everything, and sorry for my delayed reply! To address the lack of WDWDs in your simulation, I (and Lieke) would suggest adding the flag "--include-WD-binaries-as-DCO: True" and sampling with a low enough M1 (i.e. of 0.9

Also, yes, I agree that a check or a print statement alerting the user when there are none of the specific DCOs that they are masking for would be very helpful! I will implement that soon.

Yes! We are trying to get a feel for the SNe Ia rates as a function of redshift. I have attached below examples of the WDWD merger rates with a run of

I can add an error statement that warns people that the WDWD systems they are masking for are not relevant for LVK observations, making it clear that only intrinsic rates are being computed. Also, I will add the two masks for BH-WD and NS-WD systems, because we were thinking of doing that soon as well! (Will also be sure to delete any commented-out lines – sorry about that!) |
ilyamandel
left a comment
I am not a python coder, so I hope someone else (@SimonStevenson? @jeffriley?) can do a proper code review. That said, here are a few queries:
```python
"BHNS": np.logical_and(np.isin(stellar_type_1,[13,14]),np.isin(stellar_type_2,[13,14])),
```

Won't this flag NS-NS and BH-BH binaries too, not just NS+BH ones? Cf. the original code:

```python
"BHNS": np.logical_or(np.logical_and(stellar_type_1 == 14, stellar_type_2 == 13), np.logical_and(stellar_type_1 == 13, stellar_type_2 == 14))
```
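A quick way to see the difference between the two masks (a standalone sketch with made-up stellar-type arrays; 13 = neutron star and 14 = black hole in the Hurley-type convention COMPAS uses):

```python
import numpy as np

# Illustrative stellar-type arrays (hypothetical data, not from the PR):
stellar_type_1 = np.array([14, 13, 14, 13])  # BH, NS, BH, NS
stellar_type_2 = np.array([14, 13, 13, 14])  # BH, NS, NS, BH

# PR version: flags ANY pairing of types drawn from {13, 14} ...
mask_isin = np.logical_and(np.isin(stellar_type_1, [13, 14]),
                           np.isin(stellar_type_2, [13, 14]))

# ... whereas the original symmetric-pair version flags mixed NS+BH pairs only.
mask_pair = np.logical_or(np.logical_and(stellar_type_1 == 14, stellar_type_2 == 13),
                          np.logical_and(stellar_type_1 == 13, stellar_type_2 == 14))

print(mask_isin)  # [ True  True  True  True] -- also catches BHBH and NSNS
print(mask_pair)  # [False False  True  True] -- NS+BH pairs only
```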
While adding WDWD, do you want to also add WDNS and WDBH, or at least WDCO?
```python
type_masks["CHE_BBH"] = np.logical_and(self.CHE_mask, type_masks["BHBH"]) if types == "CHE_BBH" else np.repeat(False, len(dco_seeds))
```
You changed BBH to BHBH in some places but not others -- I find this mixing rather more confusing / more likely to lead to typos in future development (i.e., I don't mind if you prefer BBH or BHBH, but suggest you pick one and stick with it)
```python
from compas_python_utils.cosmic_integration import ClassCOMPAS
import ClassCOMPAS
```
There are a variety of places in different files where some of the import statements were commented out and replaced. If the new imports are preferred (why? see above about my not being a python coder), just delete the old ones: commented out code lines bloat the code and make it harder to read.
-
Repeated words in text:
"will mask binaries that binaries that experience a CEE while on the HG"
The number of merging BHBHs that need a weight
I think this comment should refer to DCOs, not just BHBHs
I have not had time to check the new normalisation code, but I am concerned by the statement that it doesn't agree with a Monte Carlo calculation. I think that's a necessary test, and it would be good to understand why they disagree.
|
Thanks Simon and Ilya for helping review this :)
I added a few lines in
Re: @ilyamandel 's points
|
|
Thanks, @LiekeVanSon ! If you don't find the issue with 7, let me know and I'll take a look. Can you confirm that Monte Carlo and the new analytical calculation agree when the binary fraction is fixed (not mass-dependent)? |
|
I can confirm the answers agree when fbin is constant (but have yet to look into the difference when fbin is varied) |
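For reference, a minimal Monte Carlo sketch of this kind of cross-check for one ingredient of the constant-fbin case: draw primary masses from the Kroupa IMF by inverse-CDF sampling and compare the sample mean against the analytical average mass. The IMF bounds, exponents, and continuity constants follow the functions quoted later in this thread; the sampler itself is illustrative, not COMPAS code.

```python
import numpy as np

# Kroupa IMF pieces as in the quoted COMPAS functions: p(M) = alpha * c_j * M^p_j
edges = np.array([0.01, 0.08, 0.5, 200.0])
powers = np.array([-0.3, -1.3, -2.3])
consts = np.array([1.0 / (0.08 * 0.5), 1.0 / 0.5, 1.0])  # continuity constants

def piece_integral(a, b, p, c, moment=0):
    """Integral of c * m^(p+moment) dm from a to b (p + moment != -1 here)."""
    q = p + moment + 1
    return c * (b**q - a**q) / q

# Normalisation alpha and the analytical mean mass (the thread's m_avg):
alpha = 1.0 / sum(piece_integral(edges[j], edges[j+1], powers[j], consts[j])
                  for j in range(3))
m_avg = alpha * sum(piece_integral(edges[j], edges[j+1], powers[j], consts[j], moment=1)
                    for j in range(3))

# Inverse-CDF Monte Carlo draw from the piecewise power law:
rng = np.random.default_rng(42)
n = 1_000_000
piece_prob = np.array([alpha * piece_integral(edges[j], edges[j+1], powers[j], consts[j])
                       for j in range(3)])
piece_prob = piece_prob / piece_prob.sum()   # guard against float round-off
j = rng.choice(3, size=n, p=piece_prob)      # pick an IMF segment per sample
u = rng.uniform(size=n)
q = powers[j] + 1.0                          # invert the within-segment CDF
m = (edges[j]**q + u * (edges[j+1]**q - edges[j]**q))**(1.0 / q)

print(f"analytical m_avg = {m_avg:.4f}, MC mean = {m.mean():.4f}")
```

With a fixed seed and 10^6 samples the two numbers should agree to well under a percent; a persistent offset would point at a bug in one of the two calculations.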
|
Hi @melanie-santiago26 , @LiekeVanSon ; Thanks for the changes! Thanks for the pointer about I ran a small population and everything worked as expected. For reference, I used: and post-processed it with I think there is still a problem with computing detection rates for WD populations. The detection rate panel in the default plot shows a non-zero value. It seems that a warning is printed in |
|
Hi @LiekeVanSon , @melanie-santiago26 -- it's been quite a while since the last round of comments on this. Have you had any luck with resolving these issues? |
…slow and error prone, just directly call get_compas_fraction
…called consistently with same m1_min and max etc
|
Hi @ilyamandel Sorry for leaving this so long! I have tried to approach this from a few angles, but I must admit I'm a bit stuck: the analytical and numerical calculations don't agree with each other when f_bin is variable. Specifically: However, when f_bin is variable (activated by setting f_bin = None), the “numerical” and analytical functions disagree.

I thought that the issue was that the quad in I also tried to start from scratch, just using an MC sampler to create a mock Universe (see notebook in zip file I must admit I didn't spend a whole lot of time debugging the latter MC code, because I now suspect I made a mistake in the analytical calculation after all...! Also, apologies in advance for an (extra) slow response, since I am transitioning to Radboud and taking a lot of offline time in August :) |
|
Could there be a bug in the "old analytical" function? For reference, the old version from this commit:

```python
def analytical_star_forming_mass_per_binary_using_kroupa_imf(
    m1_max, m1_min, m2_min, fbin=1., imf_mass_bounds=[0.01, 0.08, 0.5, 200]
):
    """
    Analytical computation of the mass of stars formed per binary star formed within the
    [m1 min, m1 max] and [m2 min, ..] range, using the Kroupa IMF:
        p(M) \propto M^-0.3 for M between m1 and m2;
        p(M) \propto M^-1.3 for M between m2 and m3;
        p(M) = alpha * M^-2.3 for M between m3 and m4;
    @Ilya Mandel's derivation
    """
    m1, m2, m3, m4 = imf_mass_bounds
    alpha = (-(m4**(-1.3) - m3**(-1.3))/1.3 - (m3**(-0.3) - m2**(-0.3))/(m3*0.3) + (m2**0.7 - m1**0.7)/(m2*m3*0.7))**(-1)
    # average mass of stars (average mass of all binaries is a factor of 1.5 larger)
    m_avg = alpha * (-(m4**(-0.3) - m3**(-0.3))/0.3 + (m3**0.7 - m2**0.7)/(m3*0.7) + (m2**1.7 - m1**1.7)/(m2*m3*1.7))
    # fraction of binaries that COMPAS simulates
    fint = -alpha / 1.3 * (m1_max**(-1.3) - m1_min**(-1.3)) + alpha * m2_min / 2.3 * (m1_max**(-2.3) - m1_min**(-2.3))
    # mass represented by each binary simulated by COMPAS
    m_rep = (1/fint) * m_avg * (1.5 + (1 - fbin)/fbin)
    return m_rep
```

The new version in this PR:

```python
def analytical_star_forming_mass_per_binary_using_kroupa_imf(
    m1_min, m1_max, m2_min, fbin=1., imf_mass_bounds=[0.01, 0.08, 0.5, 200]
):
    """
    Analytical computation of the mass of stars formed per binary star formed within the
    [m1 min, m1 max] and [m2 min, ..] range, using the Kroupa IMF:
        p(M) \propto M^-0.3 for M between m1 and m2;
        p(M) \propto M^-1.3 for M between m2 and m3;
        p(M) = alpha * M^-2.3 for M between m3 and m4;
    m1_min, m1_max are the min and max sampled primary masses
    m2_min is the min sampled secondary mass
    This function further assumes a flat mass ratio distribution with qmin = m2_min/m1, and m2_max = m1_max
    Lieke, based on Ilya Mandel's derivation
    """
    # Kroupa IMF
    m1, m2, m3, m4 = imf_mass_bounds
    continuity_constants = [1./(m2*m3), 1./m3, 1.0]
    IMF_powers = [-0.3, -1.3, -2.3]
    if m1_min < m3:
        raise ValueError(f"This analytical derivation requires IMF break m3 < m1_min ({m3} !< {m1_min})")
    if m1_min > m1_max:
        raise ValueError(f"Minimum sampled primary mass cannot be above maximum sampled primary mass: m1_min ({m1_min} !< m1_max {m1_max})")
    if m1_max > m4:
        raise ValueError(f"Maximum sampled primary mass cannot be above maximum mass of Kroupa IMF: m1_max ({m1_max} !< m4 {m4})")
    # normalize IMF over the complete mass range:
    alpha = (-(m4**(-1.3) - m3**(-1.3))/1.3 - (m3**(-0.3) - m2**(-0.3))/(m3*0.3) + (m2**0.7 - m1**0.7)/(m2*m3*0.7))**(-1)
    # we want to compute M_stellar_sys_in_universe / N_binaries_in_COMPAS
    # = N_binaries_in_universe/N_binaries_in_COMPAS * N_stellar_sys_in_universe/N_binaries_in_universe * M_stellar_sys_in_universe/N_stellar_sys_in_universe
    # = 1/fint * 1/fbin * average mass of a stellar system in the Universe
    # fint = N_binaries_in_COMPAS/N_binaries_in_universe: fraction of binaries that COMPAS simulates
    fint = -alpha / 1.3 * (m1_max**(-1.3) - m1_min**(-1.3)) + alpha * m2_min / 2.3 * (m1_max**(-2.3) - m1_min**(-2.3))
    # Next, N_stellar_sys_in_universe/N_binaries_in_universe * M_stellar_sys_in_universe/N_stellar_sys_in_universe
    # N_stellar_sys_in_universe/N_binaries_in_universe = the binary fraction
    # fbin edges and values are chosen to approximately follow Figure 1 from Offner et al. (2023)
    binary_bin_edges = [m1, 0.08, 0.5, 1, 10, m4]
    if fbin is None:
        # use a binary fraction that varies with mass
        binaryFractions = [0.1, 0.225, 0.5, 0.8, 1.0]
    else:
        # otherwise use a constant binary fraction
        binaryFractions = [fbin] * 5
    # M_stellar_sys_in_universe/N_stellar_sys_in_universe = average mass of a stellar system in the Universe;
    # we are computing 1/fbin * M_stellar_sys_in_universe/N_stellar_sys_in_universe, which (skipping steps) leads to:
    # int_A^B (1/fb(m1) + 0.5) m1 P(m1) dm1.
    # This is a double piecewise integral, i.e. pieces over the binary fraction bins and IMF mass bins.
    piece_wise_integral = 0
    # For every binary fraction bin
    for i in range(len(binary_bin_edges) - 1):
        fbin = binaryFractions[i]  # Binary fraction for this range
        # And every piece of the Kroupa IMF
        for j in range(len(imf_mass_bounds) - 1):
            exponent = IMF_powers[j]  # IMF exponent for these masses
            # Check if the binary fraction bin overlaps with the IMF mass bin
            if binary_bin_edges[i + 1] <= imf_mass_bounds[j] or binary_bin_edges[i] >= imf_mass_bounds[j + 1]:
                continue  # No overlap
            # Integrate over the overlap of the two bins
            m_start = max(binary_bin_edges[i], imf_mass_bounds[j])
            m_end = min(binary_bin_edges[i + 1], imf_mass_bounds[j + 1])
            # Compute the definite integral:
            integral = (m_end**(exponent + 2) - m_start**(exponent + 2)) / (exponent + 2) * continuity_constants[j]
            # Compute the sum term
            sum_term = (1/fbin + 0.5) * integral
            piece_wise_integral += sum_term
    # combining them:
    Average_mass_stellar_sys_per_fbin = alpha * piece_wise_integral
    # Now compute the average mass per binary in COMPAS: M_stellar_sys_in_universe / N_binaries_in_COMPAS
    M_sf_Univ_per_N_binary_COMPAS = (1/fint) * Average_mass_stellar_sys_per_fbin
    return M_sf_Univ_per_N_binary_COMPAS
```
|
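One point worth noting: for a constant binary fraction the new piecewise sum collapses to (1/fbin + 0.5) * m_avg over the full IMF range, and algebraically 1.5 + (1 - fbin)/fbin is exactly the same factor, so the two versions must agree in that case; this is consistent with the constant-fbin agreement reported above and localises the open discrepancy to the variable-fbin path. A standalone sketch (re-stating the old formula with the thread's IMF bounds; the sampled-mass bounds at the bottom are purely illustrative):

```python
import numpy as np

def star_forming_mass_old(m1_max, m1_min, m2_min, fbin):
    # Condensed re-statement of the "old" analytical formula from this thread,
    # with the same Kroupa IMF bounds [0.01, 0.08, 0.5, 200].
    m1, m2, m3, m4 = 0.01, 0.08, 0.5, 200
    alpha = (-(m4**-1.3 - m3**-1.3)/1.3 - (m3**-0.3 - m2**-0.3)/(m3*0.3)
             + (m2**0.7 - m1**0.7)/(m2*m3*0.7))**-1
    m_avg = alpha * (-(m4**-0.3 - m3**-0.3)/0.3 + (m3**0.7 - m2**0.7)/(m3*0.7)
                     + (m2**1.7 - m1**1.7)/(m2*m3*1.7))
    fint = (-alpha/1.3 * (m1_max**-1.3 - m1_min**-1.3)
            + alpha*m2_min/2.3 * (m1_max**-2.3 - m1_min**-2.3))
    return (1/fint) * m_avg * (1.5 + (1 - fbin)/fbin)

# The identity behind the constant-fbin agreement:
# 1.5 + (1 - fbin)/fbin == 1/fbin + 0.5 for any fbin > 0.
for fb in [0.3, 0.7, 1.0]:
    assert np.isclose(1.5 + (1 - fb)/fb, 1/fb + 0.5)

# Illustrative sampled-mass bounds (m1 in [5, 150], m2_min = 0.1):
print(star_forming_mass_old(m1_max=150, m1_min=5, m2_min=0.1, fbin=1.0))
```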
Added a bool for WDWD systems in ClassCOMPAS so that you can calculate the WDWD merger rates.
Added two flags (keep_pessimistic_CEE, keepRLOF_postCE) to fastcosmicintegration.py.
Updated the analytical calculation of the star-forming mass to allow for a binary fraction that varies with mass: when f_binary is None, a piecewise-constant value for f_binary is used, reflecting Offner et al. (2023).
The analytical derivation of the function analytical_star_forming_mass_per_binary_using_kroupa_imf is shown in the attached notebook: Test_fbinary_perM.zip
Warning: there seems to be a small discrepancy between the MC-sampled average and the analytical function (but I think the MC sampler might be wrong).