Most of its clients are US law enforcement agencies who use its facial-recognition software to identify suspects.
Its use of images scraped from the internet has raised privacy concerns.
The company told BBC News: "Security is Clearview's top priority.
"Unfortunately, data breaches are part of life in the 21st Century.
"Our servers were never accessed.
"We patched the flaw and continue to work to strengthen our security."
But it added that a report in the Daily Beast saying an intruder had gained unauthorised access to its list of customers was "correct".
Tim Mackey, principal analyst with security company Synopsys, said: "While their attorney rightly states that data breaches are a fact of life in modern society, the nature of Clearview AI's business makes this type of attack particularly problematic.
"Facial-recognition systems have evolved to the point where they can rapidly identify an individual - but combining facial recognition data with data from other sources like social media enables a face to be placed in a context which in turn can enable detailed user profiling, all without explicit consent from the person whose face is being tracked."
Last month, a New York Times investigation revealed photos remained on Clearview AI's database even after users had deleted them from their social media accounts.
Twitter, YouTube and Facebook have all demanded it stop using photos on their platforms.
And US senator Ron Wyden tweeted that Clearview's activities were "extremely troubling".
"Americans have a right to know whether their personal photos are secretly being sucked into a private facial-recognition database," he wrote.
But Clearview AI chief executive Hoan Ton-That told the CBS This Morning programme it was his First Amendment right to collect public photos.