<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:image="http://www.google.com/schemas/sitemap-image/1.1" xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://www.alexberardino.com/projects</loc>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
    <lastmod>2025-04-17</lastmod>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505518043925-A5CY00XNIUCADN3EVI8Q/NIPSTitle.png</image:loc>
      <image:title>Projects</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505656770089-8RMHXU6L40WFICQAZHEV/EvecsToThresholds.png</image:loc>
      <image:title>Projects</image:title>
      <image:caption>Measuring and comparing model-derived predictions of image discriminability. Two models are applied to an image (depicted as a point x in the space of pixel values), producing response vectors r_A and r_B. Responses are assumed to be stochastic, and drawn from known distributions p(r_A|x) and p(r_B|x). The Fisher Information Matrices (FIM) of the models, J_A[x] and J_B[x], provide a quadratic approximation of the discriminability of distortions relative to an image (rightmost plot, colored ellipses). The extremal eigenvalues and eigenvectors of the FIMs (colored lines) provide predictions of the most and least visible distortions. We test these predictions by measuring human discriminability in these directions (colored points). In this example, the ratio of discriminability along the extremal eigenvectors is larger for model A than for model B, indicating that model A provides a better description of human perception.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505593995691-M5MA0FPZJYGAZZNS1ETP/3figstop.png</image:loc>
      <image:title>Projects</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505592369685-2B1MU205LFQXTRO0SKSX/NLPPOIR.png</image:loc>
      <image:title>Projects</image:title>
      <image:caption>Normalized Laplacian Pyramid Perceptual Transform. The scene luminances S (in cd/m²) are first transformed using a power function (top left). The transformed luminance image is then decomposed into frequency channels using the recursive implementation of the Laplacian pyramid. Each channel z_k is then divided by a weighted sum of local amplitudes (computed with lowpass filter P) plus a constant σ. The final lowpass channel x_N is also normalized, but with distinct parameters (top right). Symbols ↑ and ↓ indicate upsampling and downsampling by a factor of 2, respectively.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505592548708-27CW89EGBQVR3K95M6I6/SummationModel.png</image:loc>
      <image:title>Projects</image:title>
      <image:caption>Construction of the NLP Distance Measure. Two images are transformed by f(·) to a perceptual representation, yielding two NLPs (see figure above). We compute the α-norm over the vector of differences for each frequency channel, and then combine these over channels using a β-norm. For all rendering results, we use α = 2.0 and β = 0.6, which are optimized to fit the human perceptual ratings on distorted images.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505593578986-PG9CB6F4Y8PABLPYTKG2/3figs.png</image:loc>
      <image:title>Projects</image:title>
      <image:caption>Rendering of an uncalibrated HDR image on a display with a limited luminance range. Linear mapping of luminances leads to loss of detail (top left: rescaling of luminances to the display range, assuming S_max = 300 cd/m²; top center: rescaling of luminances, assuming a more realistic value of S_max = 10^6 cd/m²). Top right: the image rendered using [23]. Bottom: the image optimized for NLPD, with different assumed maximum luminance values (bottom left: S_max = 10^5; bottom center: S_max = 10^6; bottom right: S_max = 10^7).</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505518402852-8SUCJ4700AU1572IV3PN/JOSATitle.png</image:loc>
      <image:title>Projects</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505593410921-ST2XZSUDI2HWK7BMP761/yosemite.png</image:loc>
      <image:title>Projects</image:title>
      <image:caption>Rendering of a calibrated HDR image on a display with a limited luminance range. The scene luminances for this image spanned the range from S_min = 0.78 cd/m² to S_max = 16,200 cd/m², whereas the display luminances are assumed to lie between 5 cd/m² and 300 cd/m². Left: the image rendered by linear rescaling of luminance values into the display range. Center: the image rendered using a state-of-the-art tone mapping algorithm [23]. Right: the image rendered using the proposed method of minimizing the NLPD metric subject to the display constraints.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505594595273-KBV7GGB91BZSKA1X5PDA/haze.png</image:loc>
      <image:title>Projects</image:title>
      <image:caption>Example of haze removal. Left: the original image. Right: the image processed by optimizing NLPD, with S_min = 5 and S_max = 10^4.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505591896160-6M04KO0YJVOPP4NBUR9Z/Valero.png</image:loc>
      <image:title>Projects</image:title>
      <image:caption>Perceptually optimized rendering framework. When we view a real-world scene, the luminances, specified by a vector S, give rise to an internal perceptual representation f(S). While luminances in the real world can range from complete darkness (0 cd/m²) to extremely bright (e.g., midday sun, roughly 10^9 cd/m²), a typical display can generate a relatively narrow range of roughly 5 to 300 cd/m². The optimization goal is to adjust the luminances I generated by the display to minimize the difference between the perceptual representations f(S) and f(I) while remaining within the set of images that can be generated by the display.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505518560669-CJ0LIPFL3IIA9VTMUHAG/NLPTitle.png</image:loc>
      <image:title>Projects</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505573747872-VYXABBHFNH7JZJJ27400/NLPCorrFinal.png</image:loc>
      <image:title>Projects</image:title>
      <image:caption>Comparison of quality metrics to human perceptual data. Each plot shows the inverse of the mean opinion score of human observers (DMOS) as a function of the prediction of a quality metric, for 1700 images corrupted by different types and magnitudes of distortion. Performance of the metric is summarized with three numbers (provided above each plot): the Pearson correlation before fitting a logistic function (r1), and the Pearson correlation (r2) and the prediction error (RMSE) after fitting a logistic function (black line).</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505573013742-IMHTD09H7VACL6ECVOVN/NLP2.png</image:loc>
      <image:title>Projects</image:title>
      <image:caption>Representation of an example image. x is the original image (left). z is the decomposition of the image using the Laplacian pyramid, with each image corresponding to a different scale (three scales shown). Note that the Laplacian pyramid includes downsampling at each scale; the examples shown here have been upsampled for visualization purposes. y are the corresponding locally contrast-normalized images.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505573327593-5U9VWOQXYWCK7KC42Q6I/NLP3.png</image:loc>
      <image:title>Projects</image:title>
      <image:caption>Local mutual information between values and their spatial neighbors within an 11 × 11 local region, shown for three representations (image pixels, Laplacian pyramid sub-band, normalized Laplacian pyramid sub-band). Brightness is proportional to the mutual information between a central coefficient and the neighbor at that relative location. Values are estimated from one million image patches. The average mutual information over the whole neighborhood is given above each panel.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505572929760-DCKYVF410G2ZF23N9XL9/NLP.png</image:loc>
      <image:title>Projects</image:title>
      <image:caption>Normalized Laplacian pyramid model diagram, shown for a single scale (k). The input image at scale k, x(k) (k = 1 corresponds to the original image), is modified by subtracting the local mean (eq. 2). This is accomplished using the standard Laplacian pyramid construction: convolve with lowpass filter L(w), downsample by a factor of two in each dimension, upsample, convolve again with L(w), and subtract from the input image x(k). This intermediate image z(k) is then normalized by an estimate of local amplitude, obtained by computing the absolute value, convolving with scale-specific filter P(k)(w), and adding the scale-specific constant σ(k) (eq. 3). As in the standard Laplacian pyramid, the blurred and downsampled image x(k+1) is the input image for scale (k + 1).</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505519523742-Q9L9OLOYXBTR2UU0OCME/CanyonImage.JPG</image:loc>
      <image:title>Projects</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505571331553-D8RXGB3GZKY07SBAN2ZV/ComSciCon.png</image:loc>
      <image:title>Projects</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505571152440-YWCUTI8BEAM0Z4I1R88K/NeuWriteHeader2.png</image:loc>
      <image:title>Projects</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505517990670-45ZI9S2SNG9SA478UIV6/WebsiteLogo.png</image:loc>
      <image:title>Projects</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505518448543-3AHO6MNT6HMYJ7124SX6/3figs.png</image:loc>
      <image:title>Projects</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505518584683-4RUPQK8HF232EFDJA8QC/NLP2.png</image:loc>
      <image:title>Projects</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505569482673-JZZELSVW4REG55SDTL2M/CanyonImage.JPG</image:loc>
      <image:title>Projects</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505570527541-4UCGUOYJL1IKJBJRQ62T/NeuWriteHeader.png</image:loc>
      <image:title>Projects</image:title>
    </image:image>
  </url>
  <url>
    <loc>https://www.alexberardino.com/blog</loc>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
    <lastmod>2016-02-22</lastmod>
  </url>
  <url>
    <loc>https://www.alexberardino.com/home</loc>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
    <lastmod>2017-09-15</lastmod>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/59bc347db7411c8398790333/1505507955945-QL06MP8ABBYE2ADUZ9J3/WebsiteLogo.png</image:loc>
      <image:title>Home - Alex Berardino</image:title>
      <image:caption>New York University</image:caption>
    </image:image>
  </url>
</urlset>

