Showing 1 - 10 of 242 for search: '"Wang-Sheng Yu"'
Published in:
Sustainable Environment Research, Vol 34, Iss 1, Pp 1-11 (2024)
Abstract Decentralized wastewater reclamation and reuse systems have drawn much attention due to their capability for reducing the energy demand for water conveyance and reclaiming wastewater for local re-use. While membrane bioreactor (MBR) stands a
External link:
https://doaj.org/article/d8412a0d1fd544f39b1caea263e2136d
The goal of data attribution for text-to-image models is to identify the training images that most influence the generation of a new image. We can define "influence" by saying that, for a given output, if a model is retrained from scratch without tha
External link:
http://arxiv.org/abs/2406.09408
Art reinterpretation is the practice of creating a variation of a reference work, making a paired artwork that exhibits a distinct artistic style. We ask if such an image pair can be used to customize a generative model to capture the demonstrated st
External link:
http://arxiv.org/abs/2405.01536
X-ray absorption spectroscopy (XAS) is widely employed for structure characterization of graphitic carbon nitride (g-C$_3$N$_4$) and its composites. Nevertheless, even for pure g-C$_3$N$_4$, discrepancies in energy and profile exist across different
External link:
http://arxiv.org/abs/2403.09115
X-ray photoelectron spectroscopy (XPS) is an important characterization tool in the pursuit of controllable fluorination of two-dimensional hexagonal boron nitride ($h$-BN). However, there is a lack of clear spectral interpretation and seemingly conf
External link:
http://arxiv.org/abs/2310.16229
While large text-to-image models are able to synthesize "novel" images, these images are necessarily a reflection of the training data. The problem of data attribution in such models -- which of the images in the training set are most responsible for
External link:
http://arxiv.org/abs/2306.09345
Author:
Kumari, Nupur, Zhang, Bingliang, Wang, Sheng-Yu, Shechtman, Eli, Zhang, Richard, Zhu, Jun-Yan
Large-scale text-to-image diffusion models can generate high-fidelity images with powerful compositional ability. However, these models are typically trained on an enormous amount of Internet data, often containing copyrighted material, licensed imag
External link:
http://arxiv.org/abs/2303.13516
Author:
Lu, Daohan, Wang, Sheng-Yu, Kumari, Nupur, Agarwal, Rohan, Tang, Mia, Bau, David, Zhu, Jun-Yan
The growing proliferation of customized and pretrained generative models has made it infeasible for a user to be fully cognizant of every model in existence. To address this need, we introduce the task of content-based model search: given a query and
External link:
http://arxiv.org/abs/2210.03116
Deep generative models make visual content creation more accessible to novice users by automating the synthesis of diverse, realistic content based on a collected dataset. However, the current machine learning approaches miss a key element of the cre
External link:
http://arxiv.org/abs/2207.14288
Can a user create a deep generative model by sketching a single example? Traditionally, creating a GAN model has required the collection of a large-scale dataset of exemplars and specialized knowledge in deep learning. In contrast, sketching is possi
External link:
http://arxiv.org/abs/2108.02774